Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2021-09-14 19:15
Elapsed: 46m27s
Revision: master

No Test Failures!


Error lines from build-log.txt

... skipping 127 lines ...
I0914 19:16:14.853980    4066 up.go:43] Cleaning up any leaked resources from previous cluster
I0914 19:16:14.854046    4066 dumplogs.go:38] /logs/artifacts/feaf99d6-158f-11ec-a66e-42e963362337/kops toolbox dump --name e2e-c4ce364831-62691.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /etc/aws-ssh/aws-ssh-private --ssh-user core
I0914 19:16:14.908143    4085 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I0914 19:16:14.908382    4085 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true

Cluster.kops.k8s.io "e2e-c4ce364831-62691.test-cncf-aws.k8s.io" not found
W0914 19:16:15.546078    4066 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I0914 19:16:15.546147    4066 down.go:48] /logs/artifacts/feaf99d6-158f-11ec-a66e-42e963362337/kops delete cluster --name e2e-c4ce364831-62691.test-cncf-aws.k8s.io --yes
I0914 19:16:15.567279    4095 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I0914 19:16:15.567401    4095 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true

error reading cluster configuration: Cluster.kops.k8s.io "e2e-c4ce364831-62691.test-cncf-aws.k8s.io" not found
I0914 19:16:16.444564    4066 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2021/09/14 19:16:16 failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
I0914 19:16:16.454795    4066 http.go:37] curl https://ip.jsb.workers.dev
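The two `curl` lines above show the harness's external-IP discovery fallback: the GCE metadata endpoint returns 404 (this job provisions on AWS), so it falls back to a public echo service. A minimal sketch of that first-success pattern; the helper name is an illustrative assumption, not code from kops or the test harness:

```shell
#!/bin/sh
# first_success: run each candidate command in turn and print the output of
# the first one that exits 0 -- the shape of the metadata-then-fallback IP
# lookup above. Helper name is an assumption, not from the real harness.
first_success() {
  for cmd in "$@"; do
    # shellcheck disable=SC2086 -- intentional word splitting of $cmd
    if out=$($cmd 2>/dev/null); then
      printf '%s\n' "$out"
      return 0
    fi
  done
  return 1
}

# Usage (endpoints taken from the log above):
#   first_success \
#     "curl -sf http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip" \
#     "curl -sf https://ip.jsb.workers.dev"
```

`curl -f` makes a 404 exit non-zero, which is what lets the loop move on to the next source.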
I0914 19:16:16.582173    4066 up.go:144] /logs/artifacts/feaf99d6-158f-11ec-a66e-42e963362337/kops create cluster --name e2e-c4ce364831-62691.test-cncf-aws.k8s.io --cloud aws --kubernetes-version https://storage.googleapis.com/kubernetes-release/release/v1.21.4 --ssh-public-key /etc/aws-ssh/aws-ssh-public --override cluster.spec.nodePortAccess=0.0.0.0/0 --yes --image=075585003325/Flatcar-stable-2905.2.3-hvm --channel=alpha --networking=kopeio --container-runtime=containerd --admin-access 34.134.0.147/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones sa-east-1a --master-size c5.large
I0914 19:16:16.603724    4106 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I0914 19:16:16.603884    4106 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true
I0914 19:16:16.673770    4106 create_cluster.go:728] Using SSH public key: /etc/aws-ssh/aws-ssh-public
I0914 19:16:17.375440    4106 new_cluster.go:1011]  Cloud Provider ID = aws
... skipping 42 lines ...

I0914 19:16:46.662234    4066 up.go:181] /logs/artifacts/feaf99d6-158f-11ec-a66e-42e963362337/kops validate cluster --name e2e-c4ce364831-62691.test-cncf-aws.k8s.io --count 10 --wait 20m0s
I0914 19:16:46.681378    4124 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I0914 19:16:46.681506    4124 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true
Validating cluster e2e-c4ce364831-62691.test-cncf-aws.k8s.io
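`kops validate cluster --wait 20m0s` polls until the cluster passes validation or the wait budget runs out, which is why the log below repeats the same failure roughly every 10 seconds. A generic sketch of that retry loop; the function name, interval, and retry count are illustrative assumptions, not kops's actual implementation:

```shell
#!/bin/sh
# poll_until: retry a command up to $1 times, sleeping $2 seconds between
# attempts -- the shape of a validate-and-wait loop like the one producing
# the retries below. Not the real kops code.
poll_until() {
  retries=$1
  interval=$2
  shift 2
  n=0
  while [ "$n" -lt "$retries" ]; do
    if "$@"; then
      return 0          # validation passed
    fi
    n=$((n + 1))
    sleep "$interval"   # kops appears to wait ~10s between attempts
  done
  return 1              # wait budget exhausted, cluster still unhealthy
}

# e.g. 120 attempts x 10s = the 20m wait seen above:
#   poll_until 120 10 kops validate cluster --name "$CLUSTER_NAME"
```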

W0914 19:16:48.206653    4124 validate_cluster.go:173] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-sa-east-1a	Master	c5.large	1	1	sa-east-1a
nodes-sa-east-1a	Node	t3.medium	4	4	sa-east-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
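The validation error above means the API DNS record still holds the placeholder address kops writes at creation time (203.0.113.123, from the TEST-NET-3 documentation range), because dns-controller has not yet replaced it with the master's real IP. A hedged sketch of checking for that by hand; the function name is illustrative and the `dig` handling is simplified:

```shell
#!/bin/sh
# 203.0.113.123 is the placeholder kops writes into the API DNS record
# before dns-controller overwrites it (quoted from the log message above).
PLACEHOLDER="203.0.113.123"

# is_placeholder: succeeds while DNS still serves the kops placeholder.
is_placeholder() {
  [ "$1" = "$PLACEHOLDER" ]
}

# By hand, one would feed it the resolved API address, e.g.:
#   is_placeholder "$(dig +short api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io | head -n1)"
```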
... skipping 352 lines (validation retried every ~10s from 19:16:58 to 19:20:29, each attempt reporting the same dns/apiserver placeholder error as above) ...
W0914 19:20:39.696996    4124 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-sa-east-1a	Master	c5.large	1	1	sa-east-1a
nodes-sa-east-1a	Node	t3.medium	4	4	sa-east-1a

... skipping 7 lines ...
Machine	i-05d276943891cd348				machine "i-05d276943891cd348" has not yet joined cluster
Machine	i-08b1db505a6ff0626				machine "i-08b1db505a6ff0626" has not yet joined cluster
Machine	i-0dd97304ea8ca0263				machine "i-0dd97304ea8ca0263" has not yet joined cluster
Pod	kube-system/coredns-5dc785954d-dhdhf		system-cluster-critical pod "coredns-5dc785954d-dhdhf" is pending
Pod	kube-system/coredns-autoscaler-84d4cfd89c-4pjqq	system-cluster-critical pod "coredns-autoscaler-84d4cfd89c-4pjqq" is pending

Validation Failed
W0914 19:20:53.404193    4124 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-sa-east-1a	Master	c5.large	1	1	sa-east-1a
nodes-sa-east-1a	Node	t3.medium	4	4	sa-east-1a

... skipping 9 lines ...
Machine	i-0dd97304ea8ca0263				machine "i-0dd97304ea8ca0263" has not yet joined cluster
Node	ip-172-20-41-171.sa-east-1.compute.internal	node "ip-172-20-41-171.sa-east-1.compute.internal" of role "node" is not ready
Node	ip-172-20-50-202.sa-east-1.compute.internal	node "ip-172-20-50-202.sa-east-1.compute.internal" of role "node" is not ready
Pod	kube-system/coredns-5dc785954d-dhdhf		system-cluster-critical pod "coredns-5dc785954d-dhdhf" is pending
Pod	kube-system/coredns-autoscaler-84d4cfd89c-4pjqq	system-cluster-critical pod "coredns-autoscaler-84d4cfd89c-4pjqq" is pending

Validation Failed
W0914 19:21:05.873362    4124 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-sa-east-1a	Master	c5.large	1	1	sa-east-1a
nodes-sa-east-1a	Node	t3.medium	4	4	sa-east-1a

... skipping 10 lines ...
Node	ip-172-20-48-74.sa-east-1.compute.internal				node "ip-172-20-48-74.sa-east-1.compute.internal" of role "node" is not ready
Node	ip-172-20-48-93.sa-east-1.compute.internal				node "ip-172-20-48-93.sa-east-1.compute.internal" of role "node" is not ready
Pod	kube-system/coredns-5dc785954d-dhdhf					system-cluster-critical pod "coredns-5dc785954d-dhdhf" is pending
Pod	kube-system/coredns-autoscaler-84d4cfd89c-4pjqq				system-cluster-critical pod "coredns-autoscaler-84d4cfd89c-4pjqq" is pending
Pod	kube-system/kube-proxy-ip-172-20-48-74.sa-east-1.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-48-74.sa-east-1.compute.internal" is pending

Validation Failed
W0914 19:21:18.337839    4124 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-sa-east-1a	Master	c5.large	1	1	sa-east-1a
nodes-sa-east-1a	Node	t3.medium	4	4	sa-east-1a

... skipping 6 lines ...
ip-172-20-50-202.sa-east-1.compute.internal	node	True

VALIDATION ERRORS
KIND	NAME									MESSAGE
Pod	kube-system/kube-proxy-ip-172-20-41-171.sa-east-1.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-41-171.sa-east-1.compute.internal" is pending

Validation Failed
W0914 19:21:30.769751    4124 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-sa-east-1a	Master	c5.large	1	1	sa-east-1a
nodes-sa-east-1a	Node	t3.medium	4	4	sa-east-1a

... skipping 876 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 14 19:24:05.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "clientset-1890" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Generated clientset should create v1beta1 cronJobs, delete cronJobs, watch cronJobs","total":-1,"completed":1,"skipped":3,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:24:05.609: INFO: Only supported for providers [openstack] (not aws)
... skipping 117 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 14 19:24:06.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslice-780" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","total":-1,"completed":1,"skipped":14,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:24:06.334: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 2 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: azure-disk]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [azure] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1566
------------------------------
... skipping 118 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 14 19:24:07.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-2303" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource ","total":-1,"completed":1,"skipped":6,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:24:08.051: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 56 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 14 19:24:09.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7046" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info  [Conformance]","total":-1,"completed":2,"skipped":13,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:24:09.644: INFO: Driver "local" does not provide raw block - skipping
... skipping 28 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: tmpfs]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 27 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    on terminated container
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:134
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":0,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:24:10.809: INFO: Driver hostPath doesn't support ext3 -- skipping
... skipping 26 lines ...
Sep 14 19:24:03.647: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-9fb7186f-d9bb-4281-9aef-7278581b6f5d
STEP: Creating a pod to test consume configMaps
Sep 14 19:24:04.235: INFO: Waiting up to 5m0s for pod "pod-configmaps-7f6ac556-cd48-4edc-a7ce-8ca3a92d8b46" in namespace "configmap-7425" to be "Succeeded or Failed"
Sep 14 19:24:04.378: INFO: Pod "pod-configmaps-7f6ac556-cd48-4edc-a7ce-8ca3a92d8b46": Phase="Pending", Reason="", readiness=false. Elapsed: 143.369741ms
Sep 14 19:24:06.523: INFO: Pod "pod-configmaps-7f6ac556-cd48-4edc-a7ce-8ca3a92d8b46": Phase="Pending", Reason="", readiness=false. Elapsed: 2.288052089s
Sep 14 19:24:08.670: INFO: Pod "pod-configmaps-7f6ac556-cd48-4edc-a7ce-8ca3a92d8b46": Phase="Pending", Reason="", readiness=false. Elapsed: 4.434618964s
Sep 14 19:24:10.819: INFO: Pod "pod-configmaps-7f6ac556-cd48-4edc-a7ce-8ca3a92d8b46": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.583621752s
STEP: Saw pod success
Sep 14 19:24:10.819: INFO: Pod "pod-configmaps-7f6ac556-cd48-4edc-a7ce-8ca3a92d8b46" satisfied condition "Succeeded or Failed"
Sep 14 19:24:10.972: INFO: Trying to get logs from node ip-172-20-48-93.sa-east-1.compute.internal pod pod-configmaps-7f6ac556-cd48-4edc-a7ce-8ca3a92d8b46 container configmap-volume-test: <nil>
STEP: delete the pod
Sep 14 19:24:11.290: INFO: Waiting for pod pod-configmaps-7f6ac556-cd48-4edc-a7ce-8ca3a92d8b46 to disappear
Sep 14 19:24:11.437: INFO: Pod pod-configmaps-7f6ac556-cd48-4edc-a7ce-8ca3a92d8b46 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 72 lines ...
Sep 14 19:24:03.594: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name projected-secret-test-map-0f3447c6-7945-4ef7-b461-eb2e193670ef
STEP: Creating a pod to test consume secrets
Sep 14 19:24:04.172: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b4621a38-2376-4f77-a0cc-0cb3cb3c6aee" in namespace "projected-9633" to be "Succeeded or Failed"
Sep 14 19:24:04.315: INFO: Pod "pod-projected-secrets-b4621a38-2376-4f77-a0cc-0cb3cb3c6aee": Phase="Pending", Reason="", readiness=false. Elapsed: 143.270101ms
Sep 14 19:24:06.459: INFO: Pod "pod-projected-secrets-b4621a38-2376-4f77-a0cc-0cb3cb3c6aee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.287135128s
Sep 14 19:24:08.604: INFO: Pod "pod-projected-secrets-b4621a38-2376-4f77-a0cc-0cb3cb3c6aee": Phase="Pending", Reason="", readiness=false. Elapsed: 4.432342149s
Sep 14 19:24:10.750: INFO: Pod "pod-projected-secrets-b4621a38-2376-4f77-a0cc-0cb3cb3c6aee": Phase="Pending", Reason="", readiness=false. Elapsed: 6.578087031s
Sep 14 19:24:12.896: INFO: Pod "pod-projected-secrets-b4621a38-2376-4f77-a0cc-0cb3cb3c6aee": Phase="Pending", Reason="", readiness=false. Elapsed: 8.723972988s
Sep 14 19:24:15.041: INFO: Pod "pod-projected-secrets-b4621a38-2376-4f77-a0cc-0cb3cb3c6aee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.869196678s
STEP: Saw pod success
Sep 14 19:24:15.041: INFO: Pod "pod-projected-secrets-b4621a38-2376-4f77-a0cc-0cb3cb3c6aee" satisfied condition "Succeeded or Failed"
Sep 14 19:24:15.185: INFO: Trying to get logs from node ip-172-20-48-74.sa-east-1.compute.internal pod pod-projected-secrets-b4621a38-2376-4f77-a0cc-0cb3cb3c6aee container projected-secret-volume-test: <nil>
STEP: delete the pod
Sep 14 19:24:15.485: INFO: Waiting for pod pod-projected-secrets-b4621a38-2376-4f77-a0cc-0cb3cb3c6aee to disappear
Sep 14 19:24:15.629: INFO: Pod pod-projected-secrets-b4621a38-2376-4f77-a0cc-0cb3cb3c6aee no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:13.064 seconds]
[sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":0,"failed":0}

SS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 14 19:24:06.345: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Sep 14 19:24:07.210: INFO: Waiting up to 5m0s for pod "busybox-user-65534-b8fac869-e00a-45bf-b7ed-d23a97fda472" in namespace "security-context-test-3051" to be "Succeeded or Failed"
Sep 14 19:24:07.354: INFO: Pod "busybox-user-65534-b8fac869-e00a-45bf-b7ed-d23a97fda472": Phase="Pending", Reason="", readiness=false. Elapsed: 143.619479ms
Sep 14 19:24:09.498: INFO: Pod "busybox-user-65534-b8fac869-e00a-45bf-b7ed-d23a97fda472": Phase="Pending", Reason="", readiness=false. Elapsed: 2.287855896s
Sep 14 19:24:11.642: INFO: Pod "busybox-user-65534-b8fac869-e00a-45bf-b7ed-d23a97fda472": Phase="Pending", Reason="", readiness=false. Elapsed: 4.43190662s
Sep 14 19:24:13.788: INFO: Pod "busybox-user-65534-b8fac869-e00a-45bf-b7ed-d23a97fda472": Phase="Pending", Reason="", readiness=false. Elapsed: 6.576947061s
Sep 14 19:24:15.931: INFO: Pod "busybox-user-65534-b8fac869-e00a-45bf-b7ed-d23a97fda472": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.720844848s
Sep 14 19:24:15.932: INFO: Pod "busybox-user-65534-b8fac869-e00a-45bf-b7ed-d23a97fda472" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 14 19:24:15.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-3051" for this suite.


... skipping 2 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a container with runAsUser
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:50
    should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":15,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:24:16.271: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 48 lines ...
• [SLOW TEST:13.480 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching orphans and release non-matching pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":-1,"completed":1,"skipped":1,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:24:16.509: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 58 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:37
[It] should support subPath [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:93
STEP: Creating a pod to test hostPath subPath
Sep 14 19:24:04.057: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-7015" to be "Succeeded or Failed"
Sep 14 19:24:04.217: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 160.107465ms
Sep 14 19:24:06.362: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.304935769s
Sep 14 19:24:08.507: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.450079541s
Sep 14 19:24:10.665: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.60835726s
Sep 14 19:24:12.815: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.75780354s
Sep 14 19:24:14.959: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.902366431s
Sep 14 19:24:17.232: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.175443932s
STEP: Saw pod success
Sep 14 19:24:17.232: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
Sep 14 19:24:17.376: INFO: Trying to get logs from node ip-172-20-50-202.sa-east-1.compute.internal pod pod-host-path-test container test-container-2: <nil>
STEP: delete the pod
Sep 14 19:24:17.685: INFO: Waiting for pod pod-host-path-test to disappear
Sep 14 19:24:17.828: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:15.228 seconds]
[sig-storage] HostPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support subPath [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:93
------------------------------
{"msg":"PASSED [sig-storage] HostPath should support subPath [NodeConformance]","total":-1,"completed":1,"skipped":1,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:24:18.268: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 68 lines ...
W0914 19:24:05.245017    4808 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Sep 14 19:24:05.245: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir volume type on tmpfs
Sep 14 19:24:05.692: INFO: Waiting up to 5m0s for pod "pod-7ae5e938-07dc-4a08-890a-cbfd1c9c0943" in namespace "emptydir-1237" to be "Succeeded or Failed"
Sep 14 19:24:05.836: INFO: Pod "pod-7ae5e938-07dc-4a08-890a-cbfd1c9c0943": Phase="Pending", Reason="", readiness=false. Elapsed: 143.128283ms
Sep 14 19:24:07.980: INFO: Pod "pod-7ae5e938-07dc-4a08-890a-cbfd1c9c0943": Phase="Pending", Reason="", readiness=false. Elapsed: 2.287158268s
Sep 14 19:24:10.123: INFO: Pod "pod-7ae5e938-07dc-4a08-890a-cbfd1c9c0943": Phase="Pending", Reason="", readiness=false. Elapsed: 4.43045226s
Sep 14 19:24:12.266: INFO: Pod "pod-7ae5e938-07dc-4a08-890a-cbfd1c9c0943": Phase="Pending", Reason="", readiness=false. Elapsed: 6.573854335s
Sep 14 19:24:14.410: INFO: Pod "pod-7ae5e938-07dc-4a08-890a-cbfd1c9c0943": Phase="Pending", Reason="", readiness=false. Elapsed: 8.717161762s
Sep 14 19:24:16.555: INFO: Pod "pod-7ae5e938-07dc-4a08-890a-cbfd1c9c0943": Phase="Pending", Reason="", readiness=false. Elapsed: 10.862287551s
Sep 14 19:24:18.699: INFO: Pod "pod-7ae5e938-07dc-4a08-890a-cbfd1c9c0943": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.006572655s
STEP: Saw pod success
Sep 14 19:24:18.699: INFO: Pod "pod-7ae5e938-07dc-4a08-890a-cbfd1c9c0943" satisfied condition "Succeeded or Failed"
Sep 14 19:24:18.842: INFO: Trying to get logs from node ip-172-20-50-202.sa-east-1.compute.internal pod pod-7ae5e938-07dc-4a08-890a-cbfd1c9c0943 container test-container: <nil>
STEP: delete the pod
Sep 14 19:24:19.133: INFO: Waiting for pod pod-7ae5e938-07dc-4a08-890a-cbfd1c9c0943 to disappear
Sep 14 19:24:19.276: INFO: Pod pod-7ae5e938-07dc-4a08-890a-cbfd1c9c0943 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:16.595 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":14,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:24:19.734: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 88 lines ...
Sep 14 19:24:16.278: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test override command
Sep 14 19:24:17.281: INFO: Waiting up to 5m0s for pod "client-containers-71a00b4e-eaf2-4570-8135-2346eeb8891f" in namespace "containers-172" to be "Succeeded or Failed"
Sep 14 19:24:17.426: INFO: Pod "client-containers-71a00b4e-eaf2-4570-8135-2346eeb8891f": Phase="Pending", Reason="", readiness=false. Elapsed: 145.174734ms
Sep 14 19:24:19.570: INFO: Pod "client-containers-71a00b4e-eaf2-4570-8135-2346eeb8891f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.288750356s
STEP: Saw pod success
Sep 14 19:24:19.570: INFO: Pod "client-containers-71a00b4e-eaf2-4570-8135-2346eeb8891f" satisfied condition "Succeeded or Failed"
Sep 14 19:24:19.713: INFO: Trying to get logs from node ip-172-20-50-202.sa-east-1.compute.internal pod client-containers-71a00b4e-eaf2-4570-8135-2346eeb8891f container agnhost-container: <nil>
STEP: delete the pod
Sep 14 19:24:20.007: INFO: Waiting for pod client-containers-71a00b4e-eaf2-4570-8135-2346eeb8891f to disappear
Sep 14 19:24:20.151: INFO: Pod client-containers-71a00b4e-eaf2-4570-8135-2346eeb8891f no longer exists
[AfterEach] [sig-node] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 36 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  With a server listening on localhost
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:474
    should support forwarding over websockets
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:490
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on localhost should support forwarding over websockets","total":-1,"completed":1,"skipped":11,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:24:21.333: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 19 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-7460e98d-5587-437c-a3e3-2275ac3ebd42
STEP: Creating a pod to test consume configMaps
Sep 14 19:24:18.576: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a783903d-5519-4912-a439-a57dfccab0b3" in namespace "projected-9618" to be "Succeeded or Failed"
Sep 14 19:24:18.720: INFO: Pod "pod-projected-configmaps-a783903d-5519-4912-a439-a57dfccab0b3": Phase="Pending", Reason="", readiness=false. Elapsed: 143.385111ms
Sep 14 19:24:20.864: INFO: Pod "pod-projected-configmaps-a783903d-5519-4912-a439-a57dfccab0b3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.28773723s
Sep 14 19:24:23.009: INFO: Pod "pod-projected-configmaps-a783903d-5519-4912-a439-a57dfccab0b3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.433044812s
STEP: Saw pod success
Sep 14 19:24:23.010: INFO: Pod "pod-projected-configmaps-a783903d-5519-4912-a439-a57dfccab0b3" satisfied condition "Succeeded or Failed"
Sep 14 19:24:23.153: INFO: Trying to get logs from node ip-172-20-50-202.sa-east-1.compute.internal pod pod-projected-configmaps-a783903d-5519-4912-a439-a57dfccab0b3 container agnhost-container: <nil>
STEP: delete the pod
Sep 14 19:24:23.446: INFO: Waiting for pod pod-projected-configmaps-a783903d-5519-4912-a439-a57dfccab0b3 to disappear
Sep 14 19:24:23.589: INFO: Pod pod-projected-configmaps-a783903d-5519-4912-a439-a57dfccab0b3 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.309 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":7,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:24:23.922: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 68 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-4ef0483a-5bd0-450e-a54d-7d819e8cd422
STEP: Creating a pod to test consume secrets
Sep 14 19:24:25.015: INFO: Waiting up to 5m0s for pod "pod-secrets-d0b70da6-2d13-47e4-a9b7-2faa2519c2d6" in namespace "secrets-2378" to be "Succeeded or Failed"
Sep 14 19:24:25.159: INFO: Pod "pod-secrets-d0b70da6-2d13-47e4-a9b7-2faa2519c2d6": Phase="Pending", Reason="", readiness=false. Elapsed: 143.361581ms
Sep 14 19:24:27.305: INFO: Pod "pod-secrets-d0b70da6-2d13-47e4-a9b7-2faa2519c2d6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.289575814s
STEP: Saw pod success
Sep 14 19:24:27.305: INFO: Pod "pod-secrets-d0b70da6-2d13-47e4-a9b7-2faa2519c2d6" satisfied condition "Succeeded or Failed"
Sep 14 19:24:27.449: INFO: Trying to get logs from node ip-172-20-41-171.sa-east-1.compute.internal pod pod-secrets-d0b70da6-2d13-47e4-a9b7-2faa2519c2d6 container secret-volume-test: <nil>
STEP: delete the pod
Sep 14 19:24:27.747: INFO: Waiting for pod pod-secrets-d0b70da6-2d13-47e4-a9b7-2faa2519c2d6 to disappear
Sep 14 19:24:27.890: INFO: Pod pod-secrets-d0b70da6-2d13-47e4-a9b7-2faa2519c2d6 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 14 19:24:27.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2378" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":27,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 14 19:24:21.342: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0666 on tmpfs
Sep 14 19:24:22.207: INFO: Waiting up to 5m0s for pod "pod-985330a3-527b-4c12-a446-e025bbe018c3" in namespace "emptydir-27" to be "Succeeded or Failed"
Sep 14 19:24:22.350: INFO: Pod "pod-985330a3-527b-4c12-a446-e025bbe018c3": Phase="Pending", Reason="", readiness=false. Elapsed: 143.288667ms
Sep 14 19:24:24.495: INFO: Pod "pod-985330a3-527b-4c12-a446-e025bbe018c3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.287799517s
Sep 14 19:24:26.640: INFO: Pod "pod-985330a3-527b-4c12-a446-e025bbe018c3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.433162705s
Sep 14 19:24:28.787: INFO: Pod "pod-985330a3-527b-4c12-a446-e025bbe018c3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.580136911s
STEP: Saw pod success
Sep 14 19:24:28.787: INFO: Pod "pod-985330a3-527b-4c12-a446-e025bbe018c3" satisfied condition "Succeeded or Failed"
Sep 14 19:24:28.933: INFO: Trying to get logs from node ip-172-20-50-202.sa-east-1.compute.internal pod pod-985330a3-527b-4c12-a446-e025bbe018c3 container test-container: <nil>
STEP: delete the pod
Sep 14 19:24:29.244: INFO: Waiting for pod pod-985330a3-527b-4c12-a446-e025bbe018c3 to disappear
Sep 14 19:24:29.387: INFO: Pod pod-985330a3-527b-4c12-a446-e025bbe018c3 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:8.334 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":12,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 66 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":2,"skipped":2,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:24:33.279: INFO: Only supported for providers [openstack] (not aws)
... skipping 168 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":2,"skipped":25,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:24:34.909: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 80 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 14 19:24:35.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-403" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should create a PodDisruptionBudget [Conformance]","total":-1,"completed":3,"skipped":22,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:24:35.855: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 138 lines ...
• [SLOW TEST:35.755 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of different groups [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":-1,"completed":1,"skipped":4,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 17 lines ...
• [SLOW TEST:35.928 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be ready immediately after startupProbe succeeds
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:400
------------------------------
{"msg":"PASSED [sig-node] Probing container should be ready immediately after startupProbe succeeds","total":-1,"completed":1,"skipped":15,"failed":0}

S
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 163 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196

      Driver csi-hostpath doesn't support ext4 -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:121
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link-bindmounted] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":1,"skipped":3,"failed":0}
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 14 19:24:37.027: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-0541c948-0dea-4a4d-8a56-1b58d31b3802
STEP: Creating a pod to test consume configMaps
Sep 14 19:24:38.036: INFO: Waiting up to 5m0s for pod "pod-configmaps-5f39d60a-0162-4d79-9d35-c92fd97c5ddb" in namespace "configmap-8599" to be "Succeeded or Failed"
Sep 14 19:24:38.179: INFO: Pod "pod-configmaps-5f39d60a-0162-4d79-9d35-c92fd97c5ddb": Phase="Pending", Reason="", readiness=false. Elapsed: 143.349376ms
Sep 14 19:24:40.323: INFO: Pod "pod-configmaps-5f39d60a-0162-4d79-9d35-c92fd97c5ddb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.287117308s
STEP: Saw pod success
Sep 14 19:24:40.323: INFO: Pod "pod-configmaps-5f39d60a-0162-4d79-9d35-c92fd97c5ddb" satisfied condition "Succeeded or Failed"
Sep 14 19:24:40.467: INFO: Trying to get logs from node ip-172-20-48-93.sa-east-1.compute.internal pod pod-configmaps-5f39d60a-0162-4d79-9d35-c92fd97c5ddb container agnhost-container: <nil>
STEP: delete the pod
Sep 14 19:24:40.764: INFO: Waiting for pod pod-configmaps-5f39d60a-0162-4d79-9d35-c92fd97c5ddb to disappear
Sep 14 19:24:40.907: INFO: Pod pod-configmaps-5f39d60a-0162-4d79-9d35-c92fd97c5ddb no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 14 19:24:40.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8599" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":3,"failed":0}

SSSSSSS
------------------------------
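The repeated `Waiting up to 5m0s for pod ... to be "Succeeded or Failed"` lines above come from the e2e framework polling the pod phase on a fixed interval until it reaches a terminal phase or the timeout expires. A minimal Python sketch of that poll loop, for illustration only (the `wait_for` helper and the fake phase sequence are made up here, not part of the Kubernetes test framework):

```python
import time

def wait_for(condition, timeout=300.0, interval=2.0,
             clock=time.monotonic, sleep=time.sleep):
    """Poll `condition` every `interval` seconds until it returns a truthy
    value, or raise TimeoutError after `timeout` seconds. Mirrors the
    wait-for-pod-phase pattern visible in the log above."""
    deadline = clock() + timeout
    while True:
        result = condition()
        if result:
            return result
        if clock() >= deadline:
            raise TimeoutError(f"condition not met within {timeout}s")
        sleep(interval)

# Example: a pod that reports "Pending" twice, then "Succeeded",
# like the Phase="Pending" ... Phase="Succeeded" lines in the log.
phases = iter(["Pending", "Pending", "Succeeded"])
seen = []

def pod_finished():
    phase = next(phases)
    seen.append(phase)
    return phase in ("Succeeded", "Failed")

wait_for(pod_finished, timeout=10.0, interval=0.0)
print(seen[-1])  # → Succeeded
```

The real framework additionally logs each intermediate phase with its elapsed time, which is what produces the `Elapsed: ...` lines above.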
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 68 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume one after the other
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithoutformat] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":1,"skipped":6,"failed":0}

SS
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl copy should copy a file from a running Pod","total":-1,"completed":1,"skipped":0,"failed":0}
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 14 19:24:14.358: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 128 lines ...
Sep 14 19:24:10.392: INFO: Using claimSize:1Gi, test suite supported size:{ 1Gi}, driver(aws) supported size:{ 1Gi} 
STEP: creating a StorageClass volume-expand-2271wqqc9
STEP: creating a claim
Sep 14 19:24:10.536: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Expanding non-expandable pvc
Sep 14 19:24:10.847: INFO: currentPvcSize {{1073741824 0} {<nil>} 1Gi BinarySI}, newSize {{2147483648 0} {<nil>}  BinarySI}
Sep 14 19:24:11.184: INFO: Error updating pvc awsthmpg: PersistentVolumeClaim "awsthmpg" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-2271wqqc9",
  	... // 2 identical fields
  }

... skipping 270 lines (15 further identical "Error updating pvc awsthmpg" retries, Sep 14 19:24:13.472 through 19:24:41.473) ...
Sep 14 19:24:41.761: INFO: Error updating pvc awsthmpg: PersistentVolumeClaim "awsthmpg" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not allow expansion of pvcs without AllowVolumeExpansion property
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:157
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property","total":-1,"completed":3,"skipped":18,"failed":0}

S
------------------------------
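The burst of `Error updating pvc awsthmpg` messages above is the expected outcome of this test: the StorageClass was created without `allowVolumeExpansion`, so every attempt to grow the claim from 1Gi to 2Gi is rejected, and the apiserver's error explains that after creation the PVC spec is immutable except for `resources.requests` on bound, expandable claims. A much-simplified Python sketch of that rule, purely as an illustration (the `validate_pvc_update` helper and its signature are invented here; this is not the actual apiserver validation code):

```python
import copy

def validate_pvc_update(old_spec, new_spec, bound, allow_expansion):
    """Return a list of validation errors, loosely mimicking the rule in
    the log: the PVC spec is immutable after creation, except that
    resources.requests may change on a bound claim whose StorageClass
    allows expansion."""
    old_cmp, new_cmp = copy.deepcopy(old_spec), copy.deepcopy(new_spec)
    if bound and allow_expansion:
        # Mask out the one field that is allowed to change.
        old_cmp.get("resources", {}).pop("requests", None)
        new_cmp.get("resources", {}).pop("requests", None)
    if old_cmp != new_cmp:
        return ["spec: Forbidden: spec is immutable after creation "
                "except resources.requests for bound claims"]
    return []

old = {"accessModes": ["ReadWriteOnce"],
       "resources": {"requests": {"storage": "1Gi"}},
       "storageClassName": "volume-expand-2271wqqc9"}

grown = copy.deepcopy(old)
grown["resources"]["requests"]["storage"] = "2Gi"

# Without allowVolumeExpansion the update is rejected, as in the log:
print(validate_pvc_update(old, grown, bound=True, allow_expansion=False))
# With expansion enabled the same update would be accepted:
print(validate_pvc_update(old, grown, bound=True, allow_expansion=True))  # → []
```

In the real apiserver the expansion check is a separate admission step, but the observable behavior matches this sketch: the test polls the update for ~30 seconds and passes precisely because the rejection never goes away.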
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:24:42.509: INFO: Driver "csi-hostpath" does not support topology - skipping
... skipping 5 lines ...
[sig-storage] CSI Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: csi-hostpath]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver "csi-hostpath" does not support topology - skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:92
------------------------------
... skipping 40 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should provision a volume and schedule a pod with AllowedTopologies
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:164
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies","total":-1,"completed":1,"skipped":2,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:24:42.812: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 104 lines ...
      Driver hostPathSymlink doesn't support ext3 -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:121
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":3,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 14 19:24:11.882: INFO: >>> kubeConfig: /root/.kube/config
... skipping 13 lines ...
Sep 14 19:24:24.751: INFO: PersistentVolumeClaim pvc-cljhl found but phase is Pending instead of Bound.
Sep 14 19:24:26.896: INFO: PersistentVolumeClaim pvc-cljhl found and phase=Bound (2.289186468s)
Sep 14 19:24:26.896: INFO: Waiting up to 3m0s for PersistentVolume local-wzvrn to have phase Bound
Sep 14 19:24:27.042: INFO: PersistentVolume local-wzvrn found and phase=Bound (146.120917ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-7cn5
STEP: Creating a pod to test subpath
Sep 14 19:24:27.481: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-7cn5" in namespace "provisioning-7079" to be "Succeeded or Failed"
Sep 14 19:24:27.626: INFO: Pod "pod-subpath-test-preprovisionedpv-7cn5": Phase="Pending", Reason="", readiness=false. Elapsed: 145.104285ms
Sep 14 19:24:29.770: INFO: Pod "pod-subpath-test-preprovisionedpv-7cn5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.289099006s
Sep 14 19:24:31.914: INFO: Pod "pod-subpath-test-preprovisionedpv-7cn5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.433452658s
Sep 14 19:24:34.059: INFO: Pod "pod-subpath-test-preprovisionedpv-7cn5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.578663013s
Sep 14 19:24:36.204: INFO: Pod "pod-subpath-test-preprovisionedpv-7cn5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.723324398s
STEP: Saw pod success
Sep 14 19:24:36.204: INFO: Pod "pod-subpath-test-preprovisionedpv-7cn5" satisfied condition "Succeeded or Failed"
Sep 14 19:24:36.348: INFO: Trying to get logs from node ip-172-20-50-202.sa-east-1.compute.internal pod pod-subpath-test-preprovisionedpv-7cn5 container test-container-subpath-preprovisionedpv-7cn5: <nil>
STEP: delete the pod
Sep 14 19:24:36.645: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-7cn5 to disappear
Sep 14 19:24:36.788: INFO: Pod pod-subpath-test-preprovisionedpv-7cn5 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-7cn5
Sep 14 19:24:36.788: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-7cn5" in namespace "provisioning-7079"
STEP: Creating pod pod-subpath-test-preprovisionedpv-7cn5
STEP: Creating a pod to test subpath
Sep 14 19:24:37.080: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-7cn5" in namespace "provisioning-7079" to be "Succeeded or Failed"
Sep 14 19:24:37.223: INFO: Pod "pod-subpath-test-preprovisionedpv-7cn5": Phase="Pending", Reason="", readiness=false. Elapsed: 143.725577ms
Sep 14 19:24:39.368: INFO: Pod "pod-subpath-test-preprovisionedpv-7cn5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.28861641s
Sep 14 19:24:41.513: INFO: Pod "pod-subpath-test-preprovisionedpv-7cn5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.432834085s
Sep 14 19:24:43.658: INFO: Pod "pod-subpath-test-preprovisionedpv-7cn5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.577815549s
STEP: Saw pod success
Sep 14 19:24:43.658: INFO: Pod "pod-subpath-test-preprovisionedpv-7cn5" satisfied condition "Succeeded or Failed"
Sep 14 19:24:43.803: INFO: Trying to get logs from node ip-172-20-50-202.sa-east-1.compute.internal pod pod-subpath-test-preprovisionedpv-7cn5 container test-container-subpath-preprovisionedpv-7cn5: <nil>
STEP: delete the pod
Sep 14 19:24:44.100: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-7cn5 to disappear
Sep 14 19:24:44.244: INFO: Pod pod-subpath-test-preprovisionedpv-7cn5 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-7cn5
Sep 14 19:24:44.244: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-7cn5" in namespace "provisioning-7079"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:399
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":2,"skipped":3,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:24:47.227: INFO: Only supported for providers [openstack] (not aws)
... skipping 70 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:110
STEP: Creating configMap with name projected-configmap-test-volume-map-b8d6413d-44f0-4aa4-884c-8b653276d5cd
STEP: Creating a pod to test consume configMaps
Sep 14 19:24:41.640: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-678f5ca2-583b-456f-bd77-517f04bbff81" in namespace "projected-4767" to be "Succeeded or Failed"
Sep 14 19:24:41.784: INFO: Pod "pod-projected-configmaps-678f5ca2-583b-456f-bd77-517f04bbff81": Phase="Pending", Reason="", readiness=false. Elapsed: 143.729458ms
Sep 14 19:24:43.930: INFO: Pod "pod-projected-configmaps-678f5ca2-583b-456f-bd77-517f04bbff81": Phase="Pending", Reason="", readiness=false. Elapsed: 2.2898676s
Sep 14 19:24:46.075: INFO: Pod "pod-projected-configmaps-678f5ca2-583b-456f-bd77-517f04bbff81": Phase="Pending", Reason="", readiness=false. Elapsed: 4.434741994s
Sep 14 19:24:48.220: INFO: Pod "pod-projected-configmaps-678f5ca2-583b-456f-bd77-517f04bbff81": Phase="Pending", Reason="", readiness=false. Elapsed: 6.579404843s
Sep 14 19:24:50.364: INFO: Pod "pod-projected-configmaps-678f5ca2-583b-456f-bd77-517f04bbff81": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.724078331s
STEP: Saw pod success
Sep 14 19:24:50.364: INFO: Pod "pod-projected-configmaps-678f5ca2-583b-456f-bd77-517f04bbff81" satisfied condition "Succeeded or Failed"
Sep 14 19:24:50.508: INFO: Trying to get logs from node ip-172-20-50-202.sa-east-1.compute.internal pod pod-projected-configmaps-678f5ca2-583b-456f-bd77-517f04bbff81 container agnhost-container: <nil>
STEP: delete the pod
Sep 14 19:24:50.801: INFO: Waiting for pod pod-projected-configmaps-678f5ca2-583b-456f-bd77-517f04bbff81 to disappear
Sep 14 19:24:50.944: INFO: Pod pod-projected-configmaps-678f5ca2-583b-456f-bd77-517f04bbff81 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:10.600 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:110
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":1,"skipped":16,"failed":0}

SSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 173 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-projected-all-test-volume-42da91c3-7c7b-4381-a1d4-2ffc14f247c1
STEP: Creating secret with name secret-projected-all-test-volume-188a426e-dd95-45f3-80db-dc69d1d4a08f
STEP: Creating a pod to test Check all projections for projected volume plugin
Sep 14 19:24:43.680: INFO: Waiting up to 5m0s for pod "projected-volume-9b144b4f-eaa3-4cc0-b3b3-1617b51f1a7f" in namespace "projected-2507" to be "Succeeded or Failed"
Sep 14 19:24:43.828: INFO: Pod "projected-volume-9b144b4f-eaa3-4cc0-b3b3-1617b51f1a7f": Phase="Pending", Reason="", readiness=false. Elapsed: 147.519955ms
Sep 14 19:24:45.971: INFO: Pod "projected-volume-9b144b4f-eaa3-4cc0-b3b3-1617b51f1a7f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.291389719s
Sep 14 19:24:48.117: INFO: Pod "projected-volume-9b144b4f-eaa3-4cc0-b3b3-1617b51f1a7f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.436763154s
Sep 14 19:24:50.261: INFO: Pod "projected-volume-9b144b4f-eaa3-4cc0-b3b3-1617b51f1a7f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.580940728s
Sep 14 19:24:52.406: INFO: Pod "projected-volume-9b144b4f-eaa3-4cc0-b3b3-1617b51f1a7f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.726243719s
STEP: Saw pod success
Sep 14 19:24:52.406: INFO: Pod "projected-volume-9b144b4f-eaa3-4cc0-b3b3-1617b51f1a7f" satisfied condition "Succeeded or Failed"
Sep 14 19:24:52.550: INFO: Trying to get logs from node ip-172-20-50-202.sa-east-1.compute.internal pod projected-volume-9b144b4f-eaa3-4cc0-b3b3-1617b51f1a7f container projected-all-volume-test: <nil>
STEP: delete the pod
Sep 14 19:24:52.849: INFO: Waiting for pod projected-volume-9b144b4f-eaa3-4cc0-b3b3-1617b51f1a7f to disappear
Sep 14 19:24:52.993: INFO: Pod projected-volume-9b144b4f-eaa3-4cc0-b3b3-1617b51f1a7f no longer exists
[AfterEach] [sig-storage] Projected combined
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:10.763 seconds]
[sig-storage] Projected combined
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":21,"failed":0}

SSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:24:53.342: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 133 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 14 19:24:55.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-1267" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return chunks of table results for list calls","total":-1,"completed":5,"skipped":42,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:24:55.452: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 124 lines ...
• [SLOW TEST:13.492 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":-1,"completed":3,"skipped":19,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:24:55.836: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 65 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-916c1136-acb0-45bc-87e2-59fe6c716da3
STEP: Creating a pod to test consume configMaps
Sep 14 19:24:52.296: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-61b1c3a8-73bc-4897-a0fc-9e9830aa29d4" in namespace "projected-8839" to be "Succeeded or Failed"
Sep 14 19:24:52.439: INFO: Pod "pod-projected-configmaps-61b1c3a8-73bc-4897-a0fc-9e9830aa29d4": Phase="Pending", Reason="", readiness=false. Elapsed: 143.097154ms
Sep 14 19:24:54.583: INFO: Pod "pod-projected-configmaps-61b1c3a8-73bc-4897-a0fc-9e9830aa29d4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.286849813s
Sep 14 19:24:56.740: INFO: Pod "pod-projected-configmaps-61b1c3a8-73bc-4897-a0fc-9e9830aa29d4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.444233568s
STEP: Saw pod success
Sep 14 19:24:56.740: INFO: Pod "pod-projected-configmaps-61b1c3a8-73bc-4897-a0fc-9e9830aa29d4" satisfied condition "Succeeded or Failed"
Sep 14 19:24:56.885: INFO: Trying to get logs from node ip-172-20-50-202.sa-east-1.compute.internal pod pod-projected-configmaps-61b1c3a8-73bc-4897-a0fc-9e9830aa29d4 container agnhost-container: <nil>
STEP: delete the pod
Sep 14 19:24:57.185: INFO: Waiting for pod pod-projected-configmaps-61b1c3a8-73bc-4897-a0fc-9e9830aa29d4 to disappear
Sep 14 19:24:57.328: INFO: Pod pod-projected-configmaps-61b1c3a8-73bc-4897-a0fc-9e9830aa29d4 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.328 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":26,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:24:57.626: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 124 lines ...
• [SLOW TEST:54.680 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should remove pods when job is deleted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:189
------------------------------
{"msg":"PASSED [sig-apps] Job should remove pods when job is deleted","total":-1,"completed":1,"skipped":1,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 24 lines ...
Sep 14 19:24:40.101: INFO: PersistentVolumeClaim pvc-866kv found and phase=Bound (10.862966482s)
Sep 14 19:24:40.102: INFO: Waiting up to 3m0s for PersistentVolume nfs-wnd5k to have phase Bound
Sep 14 19:24:40.245: INFO: PersistentVolume nfs-wnd5k found and phase=Bound (143.275313ms)
STEP: Checking pod has write access to PersistentVolume
Sep 14 19:24:40.531: INFO: Creating nfs test pod
Sep 14 19:24:40.676: INFO: Pod should terminate with exitcode 0 (success)
Sep 14 19:24:40.676: INFO: Waiting up to 5m0s for pod "pvc-tester-k9vfm" in namespace "pv-3121" to be "Succeeded or Failed"
Sep 14 19:24:40.819: INFO: Pod "pvc-tester-k9vfm": Phase="Pending", Reason="", readiness=false. Elapsed: 142.968856ms
Sep 14 19:24:42.973: INFO: Pod "pvc-tester-k9vfm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.296173759s
STEP: Saw pod success
Sep 14 19:24:42.973: INFO: Pod "pvc-tester-k9vfm" satisfied condition "Succeeded or Failed"
Sep 14 19:24:42.973: INFO: Pod pvc-tester-k9vfm succeeded 
Sep 14 19:24:42.973: INFO: Deleting pod "pvc-tester-k9vfm" in namespace "pv-3121"
Sep 14 19:24:43.122: INFO: Wait up to 5m0s for pod "pvc-tester-k9vfm" to be fully deleted
STEP: Deleting the PVC to invoke the reclaim policy.
Sep 14 19:24:43.265: INFO: Deleting PVC pvc-866kv to trigger reclamation of PV nfs-wnd5k
Sep 14 19:24:43.265: INFO: Deleting PersistentVolumeClaim "pvc-866kv"
... skipping 23 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:122
    with Single PV - PVC pairs
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:155
      create a PV and a pre-bound PVC: test write access
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PV and a pre-bound PVC: test write access","total":-1,"completed":2,"skipped":7,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:24:58.762: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 22 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run with an explicit non-root user ID [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:129
Sep 14 19:24:59.638: INFO: Waiting up to 5m0s for pod "explicit-nonroot-uid" in namespace "security-context-test-3478" to be "Succeeded or Failed"
Sep 14 19:24:59.782: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 143.414768ms
Sep 14 19:25:01.927: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.289007824s
Sep 14 19:25:04.074: INFO: Pod "explicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.436087638s
Sep 14 19:25:04.074: INFO: Pod "explicit-nonroot-uid" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 14 19:25:04.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-3478" for this suite.


... skipping 2 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a container with runAsNonRoot
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:104
    should run with an explicit non-root user ID [LinuxOnly]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:129
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should run with an explicit non-root user ID [LinuxOnly]","total":-1,"completed":3,"skipped":16,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:25:04.524: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 42 lines ...
• [SLOW TEST:23.202 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":8,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:25:05.236: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 121 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":1,"skipped":1,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:25:12.022: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 131 lines ...
Sep 14 19:24:38.536: INFO: PersistentVolumeClaim pvc-jhmm2 found but phase is Pending instead of Bound.
Sep 14 19:24:40.681: INFO: PersistentVolumeClaim pvc-jhmm2 found and phase=Bound (4.432083687s)
Sep 14 19:24:40.681: INFO: Waiting up to 3m0s for PersistentVolume local-wf6js to have phase Bound
Sep 14 19:24:40.824: INFO: PersistentVolume local-wf6js found and phase=Bound (142.956165ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-whb2
STEP: Creating a pod to test atomic-volume-subpath
Sep 14 19:24:41.256: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-whb2" in namespace "provisioning-3866" to be "Succeeded or Failed"
Sep 14 19:24:41.401: INFO: Pod "pod-subpath-test-preprovisionedpv-whb2": Phase="Pending", Reason="", readiness=false. Elapsed: 144.69446ms
Sep 14 19:24:43.546: INFO: Pod "pod-subpath-test-preprovisionedpv-whb2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.290000539s
Sep 14 19:24:45.693: INFO: Pod "pod-subpath-test-preprovisionedpv-whb2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.43677762s
Sep 14 19:24:47.839: INFO: Pod "pod-subpath-test-preprovisionedpv-whb2": Phase="Running", Reason="", readiness=true. Elapsed: 6.583065248s
Sep 14 19:24:49.983: INFO: Pod "pod-subpath-test-preprovisionedpv-whb2": Phase="Running", Reason="", readiness=true. Elapsed: 8.726984576s
Sep 14 19:24:52.127: INFO: Pod "pod-subpath-test-preprovisionedpv-whb2": Phase="Running", Reason="", readiness=true. Elapsed: 10.870741895s
... skipping 2 lines ...
Sep 14 19:24:58.562: INFO: Pod "pod-subpath-test-preprovisionedpv-whb2": Phase="Running", Reason="", readiness=true. Elapsed: 17.305877334s
Sep 14 19:25:00.706: INFO: Pod "pod-subpath-test-preprovisionedpv-whb2": Phase="Running", Reason="", readiness=true. Elapsed: 19.449271s
Sep 14 19:25:02.852: INFO: Pod "pod-subpath-test-preprovisionedpv-whb2": Phase="Running", Reason="", readiness=true. Elapsed: 21.595523801s
Sep 14 19:25:04.996: INFO: Pod "pod-subpath-test-preprovisionedpv-whb2": Phase="Running", Reason="", readiness=true. Elapsed: 23.739216683s
Sep 14 19:25:07.139: INFO: Pod "pod-subpath-test-preprovisionedpv-whb2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 25.883042797s
STEP: Saw pod success
Sep 14 19:25:07.140: INFO: Pod "pod-subpath-test-preprovisionedpv-whb2" satisfied condition "Succeeded or Failed"
Sep 14 19:25:07.283: INFO: Trying to get logs from node ip-172-20-48-74.sa-east-1.compute.internal pod pod-subpath-test-preprovisionedpv-whb2 container test-container-subpath-preprovisionedpv-whb2: <nil>
STEP: delete the pod
Sep 14 19:25:07.577: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-whb2 to disappear
Sep 14 19:25:07.720: INFO: Pod pod-subpath-test-preprovisionedpv-whb2 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-whb2
Sep 14 19:25:07.720: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-whb2" in namespace "provisioning-3866"
... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":3,"skipped":13,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:25:12.917: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 28 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 14 19:25:13.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-5537" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":-1,"completed":4,"skipped":15,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:25:14.236: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 84 lines ...
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 14 19:24:55.517: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to exceed backoffLimit
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:349
STEP: Creating a job
STEP: Ensuring job exceed backofflimit
STEP: Checking that 2 pod created and status is failed
[AfterEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 14 19:25:14.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-6687" for this suite.


• [SLOW TEST:19.446 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should fail to exceed backoffLimit
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:349
------------------------------
{"msg":"PASSED [sig-apps] Job should fail to exceed backoffLimit","total":-1,"completed":6,"skipped":55,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
... skipping 44 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (block volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":2,"skipped":5,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 19 lines ...
Sep 14 19:25:09.181: INFO: PersistentVolumeClaim pvc-j2lxj found but phase is Pending instead of Bound.
Sep 14 19:25:11.325: INFO: PersistentVolumeClaim pvc-j2lxj found and phase=Bound (10.85885756s)
Sep 14 19:25:11.325: INFO: Waiting up to 3m0s for PersistentVolume local-jd6v9 to have phase Bound
Sep 14 19:25:11.467: INFO: PersistentVolume local-jd6v9 found and phase=Bound (142.531682ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-zfvn
STEP: Creating a pod to test subpath
Sep 14 19:25:11.896: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-zfvn" in namespace "provisioning-9337" to be "Succeeded or Failed"
Sep 14 19:25:12.040: INFO: Pod "pod-subpath-test-preprovisionedpv-zfvn": Phase="Pending", Reason="", readiness=false. Elapsed: 143.667154ms
Sep 14 19:25:14.185: INFO: Pod "pod-subpath-test-preprovisionedpv-zfvn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.28838901s
Sep 14 19:25:16.329: INFO: Pod "pod-subpath-test-preprovisionedpv-zfvn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.432470806s
STEP: Saw pod success
Sep 14 19:25:16.329: INFO: Pod "pod-subpath-test-preprovisionedpv-zfvn" satisfied condition "Succeeded or Failed"
Sep 14 19:25:16.472: INFO: Trying to get logs from node ip-172-20-48-93.sa-east-1.compute.internal pod pod-subpath-test-preprovisionedpv-zfvn container test-container-subpath-preprovisionedpv-zfvn: <nil>
STEP: delete the pod
Sep 14 19:25:16.766: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-zfvn to disappear
Sep 14 19:25:16.911: INFO: Pod pod-subpath-test-preprovisionedpv-zfvn no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-zfvn
Sep 14 19:25:16.912: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-zfvn" in namespace "provisioning-9337"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":4,"skipped":31,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 21 lines ...
STEP: Registering the custom resource webhook via the AdmissionRegistration API
Sep 14 19:24:26.556: INFO: Waiting for webhook configuration to be ready...
Sep 14 19:24:36.955: INFO: Waiting for webhook configuration to be ready...
Sep 14 19:24:47.256: INFO: Waiting for webhook configuration to be ready...
Sep 14 19:24:57.649: INFO: Waiting for webhook configuration to be ready...
Sep 14 19:25:07.939: INFO: Waiting for webhook configuration to be ready...
Sep 14 19:25:07.940: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.errorString | 0xc00023e240>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 401 lines ...
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny custom resource creation, update and deletion [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Sep 14 19:25:07.940: waiting for webhook configuration to be ready
  Unexpected error:
      <*errors.errorString | 0xc00023e240>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1749
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":-1,"completed":0,"skipped":4,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:25:21.198: INFO: Only supported for providers [azure] (not aws)
... skipping 47 lines ...
[It] should support readOnly file specified in the volumeMount [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:384
Sep 14 19:25:15.767: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Sep 14 19:25:15.912: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-lqr6
STEP: Creating a pod to test subpath
Sep 14 19:25:16.057: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-lqr6" in namespace "provisioning-7917" to be "Succeeded or Failed"
Sep 14 19:25:16.201: INFO: Pod "pod-subpath-test-inlinevolume-lqr6": Phase="Pending", Reason="", readiness=false. Elapsed: 143.124272ms
Sep 14 19:25:18.345: INFO: Pod "pod-subpath-test-inlinevolume-lqr6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.287706423s
Sep 14 19:25:20.490: INFO: Pod "pod-subpath-test-inlinevolume-lqr6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.432735937s
STEP: Saw pod success
Sep 14 19:25:20.490: INFO: Pod "pod-subpath-test-inlinevolume-lqr6" satisfied condition "Succeeded or Failed"
Sep 14 19:25:20.633: INFO: Trying to get logs from node ip-172-20-41-171.sa-east-1.compute.internal pod pod-subpath-test-inlinevolume-lqr6 container test-container-subpath-inlinevolume-lqr6: <nil>
STEP: delete the pod
Sep 14 19:25:20.927: INFO: Waiting for pod pod-subpath-test-inlinevolume-lqr6 to disappear
Sep 14 19:25:21.070: INFO: Pod pod-subpath-test-inlinevolume-lqr6 no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-lqr6
Sep 14 19:25:21.070: INFO: Deleting pod "pod-subpath-test-inlinevolume-lqr6" in namespace "provisioning-7917"
... skipping 39 lines ...
Sep 14 19:25:08.804: INFO: PersistentVolumeClaim pvc-2vt2s found but phase is Pending instead of Bound.
Sep 14 19:25:10.948: INFO: PersistentVolumeClaim pvc-2vt2s found and phase=Bound (8.718261906s)
Sep 14 19:25:10.948: INFO: Waiting up to 3m0s for PersistentVolume local-2jbgh to have phase Bound
Sep 14 19:25:11.091: INFO: PersistentVolume local-2jbgh found and phase=Bound (143.30343ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-8pt2
STEP: Creating a pod to test subpath
Sep 14 19:25:11.522: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-8pt2" in namespace "provisioning-2162" to be "Succeeded or Failed"
Sep 14 19:25:11.666: INFO: Pod "pod-subpath-test-preprovisionedpv-8pt2": Phase="Pending", Reason="", readiness=false. Elapsed: 143.556964ms
Sep 14 19:25:13.810: INFO: Pod "pod-subpath-test-preprovisionedpv-8pt2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.288053828s
Sep 14 19:25:15.955: INFO: Pod "pod-subpath-test-preprovisionedpv-8pt2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.433020396s
STEP: Saw pod success
Sep 14 19:25:15.955: INFO: Pod "pod-subpath-test-preprovisionedpv-8pt2" satisfied condition "Succeeded or Failed"
Sep 14 19:25:16.099: INFO: Trying to get logs from node ip-172-20-48-74.sa-east-1.compute.internal pod pod-subpath-test-preprovisionedpv-8pt2 container test-container-subpath-preprovisionedpv-8pt2: <nil>
STEP: delete the pod
Sep 14 19:25:16.395: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-8pt2 to disappear
Sep 14 19:25:16.539: INFO: Pod pod-subpath-test-preprovisionedpv-8pt2 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-8pt2
Sep 14 19:25:16.539: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-8pt2" in namespace "provisioning-2162"
STEP: Creating pod pod-subpath-test-preprovisionedpv-8pt2
STEP: Creating a pod to test subpath
Sep 14 19:25:16.831: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-8pt2" in namespace "provisioning-2162" to be "Succeeded or Failed"
Sep 14 19:25:16.975: INFO: Pod "pod-subpath-test-preprovisionedpv-8pt2": Phase="Pending", Reason="", readiness=false. Elapsed: 143.630342ms
Sep 14 19:25:19.120: INFO: Pod "pod-subpath-test-preprovisionedpv-8pt2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.288576564s
STEP: Saw pod success
Sep 14 19:25:19.120: INFO: Pod "pod-subpath-test-preprovisionedpv-8pt2" satisfied condition "Succeeded or Failed"
Sep 14 19:25:19.267: INFO: Trying to get logs from node ip-172-20-48-74.sa-east-1.compute.internal pod pod-subpath-test-preprovisionedpv-8pt2 container test-container-subpath-preprovisionedpv-8pt2: <nil>
STEP: delete the pod
Sep 14 19:25:19.561: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-8pt2 to disappear
Sep 14 19:25:19.704: INFO: Pod pod-subpath-test-preprovisionedpv-8pt2 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-8pt2
Sep 14 19:25:19.704: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-8pt2" in namespace "provisioning-2162"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:399
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":3,"skipped":33,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:25:21.736: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 64 lines ...
Sep 14 19:25:08.387: INFO: PersistentVolumeClaim pvc-jxfnt found but phase is Pending instead of Bound.
Sep 14 19:25:10.531: INFO: PersistentVolumeClaim pvc-jxfnt found and phase=Bound (13.010342793s)
Sep 14 19:25:10.531: INFO: Waiting up to 3m0s for PersistentVolume local-n762g to have phase Bound
Sep 14 19:25:10.675: INFO: PersistentVolume local-n762g found and phase=Bound (143.704664ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-bg5h
STEP: Creating a pod to test exec-volume-test
Sep 14 19:25:11.107: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-bg5h" in namespace "volume-1184" to be "Succeeded or Failed"
Sep 14 19:25:11.250: INFO: Pod "exec-volume-test-preprovisionedpv-bg5h": Phase="Pending", Reason="", readiness=false. Elapsed: 143.276426ms
Sep 14 19:25:13.394: INFO: Pod "exec-volume-test-preprovisionedpv-bg5h": Phase="Pending", Reason="", readiness=false. Elapsed: 2.287295874s
Sep 14 19:25:15.538: INFO: Pod "exec-volume-test-preprovisionedpv-bg5h": Phase="Pending", Reason="", readiness=false. Elapsed: 4.431125737s
Sep 14 19:25:17.683: INFO: Pod "exec-volume-test-preprovisionedpv-bg5h": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.576576335s
STEP: Saw pod success
Sep 14 19:25:17.684: INFO: Pod "exec-volume-test-preprovisionedpv-bg5h" satisfied condition "Succeeded or Failed"
Sep 14 19:25:17.830: INFO: Trying to get logs from node ip-172-20-48-93.sa-east-1.compute.internal pod exec-volume-test-preprovisionedpv-bg5h container exec-container-preprovisionedpv-bg5h: <nil>
STEP: delete the pod
Sep 14 19:25:18.131: INFO: Waiting for pod exec-volume-test-preprovisionedpv-bg5h to disappear
Sep 14 19:25:18.276: INFO: Pod exec-volume-test-preprovisionedpv-bg5h no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-bg5h
Sep 14 19:25:18.276: INFO: Deleting pod "exec-volume-test-preprovisionedpv-bg5h" in namespace "volume-1184"
... skipping 41 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 14 19:25:22.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-416" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":3,"skipped":7,"failed":0}
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 14 19:25:21.656: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward api env vars
Sep 14 19:25:22.522: INFO: Waiting up to 5m0s for pod "downward-api-2919cbdc-043f-47e8-bbfb-b14c5a88de1e" in namespace "downward-api-3869" to be "Succeeded or Failed"
Sep 14 19:25:22.665: INFO: Pod "downward-api-2919cbdc-043f-47e8-bbfb-b14c5a88de1e": Phase="Pending", Reason="", readiness=false. Elapsed: 143.022805ms
Sep 14 19:25:24.809: INFO: Pod "downward-api-2919cbdc-043f-47e8-bbfb-b14c5a88de1e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.286800712s
STEP: Saw pod success
Sep 14 19:25:24.809: INFO: Pod "downward-api-2919cbdc-043f-47e8-bbfb-b14c5a88de1e" satisfied condition "Succeeded or Failed"
Sep 14 19:25:24.952: INFO: Trying to get logs from node ip-172-20-50-202.sa-east-1.compute.internal pod downward-api-2919cbdc-043f-47e8-bbfb-b14c5a88de1e container dapi-container: <nil>
STEP: delete the pod
Sep 14 19:25:25.245: INFO: Waiting for pod downward-api-2919cbdc-043f-47e8-bbfb-b14c5a88de1e to disappear
Sep 14 19:25:25.388: INFO: Pod downward-api-2919cbdc-043f-47e8-bbfb-b14c5a88de1e no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 14 19:25:25.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3869" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":7,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:25:25.690: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 9 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/capacity.go:111

      Only supported for providers [vsphere] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1437
------------------------------
{"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":17,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 14 19:24:20.456: INFO: >>> kubeConfig: /root/.kube/config
... skipping 72 lines ...
Sep 14 19:25:18.845: INFO: Waiting for pod aws-client to disappear
Sep 14 19:25:18.989: INFO: Pod aws-client no longer exists
STEP: cleaning the environment after aws
STEP: Deleting pv and pvc
Sep 14 19:25:18.989: INFO: Deleting PersistentVolumeClaim "pvc-jwfr8"
Sep 14 19:25:19.136: INFO: Deleting PersistentVolume "aws-snqj9"
Sep 14 19:25:19.527: INFO: Couldn't delete PD "aws://sa-east-1a/vol-00a19cf8e05f01cdf", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-00a19cf8e05f01cdf is currently attached to i-0458a1602b5dcb9d9
	status code: 400, request id: 664dae38-45cf-487c-802b-9ebf9db4a8ba
Sep 14 19:25:25.444: INFO: Successfully deleted PD "aws://sa-east-1a/vol-00a19cf8e05f01cdf".
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 14 19:25:25.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-966" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (ext4)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data","total":-1,"completed":4,"skipped":17,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:25:25.753: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 14 lines ...
      Only supported for node OS distro [gci ubuntu custom] (not debian)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:263
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":34,"failed":0}
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 14 19:25:22.664: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 9 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 14 19:25:26.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7695" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Pods should get a host IP [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":34,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:25:26.411: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 79 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl patch
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1460
    should add annotations for pods in rc  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":-1,"completed":1,"skipped":12,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:25:27.167: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 65 lines ...
STEP: Creating a kubernetes client
Sep 14 19:24:39.074: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating the pod
Sep 14 19:24:39.790: INFO: PodSpec: initContainers in spec.initContainers
Sep 14 19:25:28.149: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-fd26b443-9e6d-4596-bc94-7805b91ccdc3", GenerateName:"", Namespace:"init-container-9554", SelfLink:"", UID:"1488b74d-5438-4fa7-9003-dfebcd318b4b", ResourceVersion:"3969", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63767244279, loc:(*time.Location)(0x9de2b80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"790278674"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002b47ed8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002b47ef0)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002b47f08), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002b47f20)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-api-access-7zcgz", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), 
ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc002d22080), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-7zcgz", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-7zcgz", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), 
SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.4.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-7zcgz", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002a2d8a0), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"ip-172-20-50-202.sa-east-1.compute.internal", HostNetwork:false, 
HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002b14700), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002a2d920)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002a2d940)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002a2d948), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002a2d94c), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc002d41120), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63767244279, loc:(*time.Location)(0x9de2b80)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63767244279, loc:(*time.Location)(0x9de2b80)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63767244279, loc:(*time.Location)(0x9de2b80)}}, Reason:"ContainersNotReady", 
Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63767244279, loc:(*time.Location)(0x9de2b80)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.20.50.202", PodIP:"100.96.2.18", PodIPs:[]v1.PodIP{v1.PodIP{IP:"100.96.2.18"}}, StartTime:(*v1.Time)(0xc002b47f50), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002b147e0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002b14850)}, Ready:false, RestartCount:3, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", ImageID:"k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592", ContainerID:"containerd://f13f1095cd4060a2b08cde7d86d65b8f75c89cfa63648dcbfe3e5014e83555a4", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002d22120), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002d22100), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, 
LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.4.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc002a2d9cf)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 14 19:25:28.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-9554" for this suite.


• [SLOW TEST:49.363 seconds]
[sig-node] InitContainer [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":-1,"completed":2,"skipped":16,"failed":0}

SS
------------------------------
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 14 19:25:28.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-9548" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should surface a failure condition on a common issue like exceeded quota","total":-1,"completed":2,"skipped":22,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:25:29.272: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 11 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
S
------------------------------
{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":5,"skipped":22,"failed":0}
[BeforeEach] [sig-instrumentation] Events API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 14 19:25:26.949: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 18 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 14 19:25:29.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-8739" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":6,"skipped":22,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:25:29.887: INFO: Only supported for providers [gce gke] (not aws)
... skipping 119 lines ...
Sep 14 19:25:26.439: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0644 on node default medium
Sep 14 19:25:27.299: INFO: Waiting up to 5m0s for pod "pod-68ab0ffe-45e4-4c70-a5f9-eae688aeb94e" in namespace "emptydir-6487" to be "Succeeded or Failed"
Sep 14 19:25:27.442: INFO: Pod "pod-68ab0ffe-45e4-4c70-a5f9-eae688aeb94e": Phase="Pending", Reason="", readiness=false. Elapsed: 142.818332ms
Sep 14 19:25:29.586: INFO: Pod "pod-68ab0ffe-45e4-4c70-a5f9-eae688aeb94e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.28666311s
STEP: Saw pod success
Sep 14 19:25:29.586: INFO: Pod "pod-68ab0ffe-45e4-4c70-a5f9-eae688aeb94e" satisfied condition "Succeeded or Failed"
Sep 14 19:25:29.729: INFO: Trying to get logs from node ip-172-20-41-171.sa-east-1.compute.internal pod pod-68ab0ffe-45e4-4c70-a5f9-eae688aeb94e container test-container: <nil>
STEP: delete the pod
Sep 14 19:25:30.022: INFO: Waiting for pod pod-68ab0ffe-45e4-4c70-a5f9-eae688aeb94e to disappear
Sep 14 19:25:30.164: INFO: Pod pod-68ab0ffe-45e4-4c70-a5f9-eae688aeb94e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 14 19:25:30.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6487" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":41,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 47 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:376
    should support exec using resource/name
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:428
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec using resource/name","total":-1,"completed":7,"skipped":57,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:25:31.723: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 43 lines ...
Sep 14 19:25:26.472: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Sep 14 19:25:26.472: INFO: Running '/tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-3640 describe pod agnhost-primary-jfmm8'
Sep 14 19:25:27.299: INFO: stderr: ""
Sep 14 19:25:27.299: INFO: stdout: "Name:         agnhost-primary-jfmm8\nNamespace:    kubectl-3640\nPriority:     0\nNode:         ip-172-20-50-202.sa-east-1.compute.internal/172.20.50.202\nStart Time:   Tue, 14 Sep 2021 19:25:23 +0000\nLabels:       app=agnhost\n              role=primary\nAnnotations:  <none>\nStatus:       Running\nIP:           100.96.2.35\nIPs:\n  IP:           100.96.2.35\nControlled By:  ReplicationController/agnhost-primary\nContainers:\n  agnhost-primary:\n    Container ID:   containerd://545297b78fdfbe668c02eba7f08649b8fb620f746aa36c22297e41287328658f\n    Image:          k8s.gcr.io/e2e-test-images/agnhost:2.32\n    Image ID:       k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Tue, 14 Sep 2021 19:25:24 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kck9s (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  kube-api-access-kck9s:\n    Type:                    Projected (a volume that contains injected data from multiple sources)\n    TokenExpirationSeconds:  3607\n    ConfigMapName:           kube-root-ca.crt\n    ConfigMapOptional:       <nil>\n    DownwardAPI:             true\nQoS Class:                   BestEffort\nNode-Selectors:              <none>\nTolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n  Type    Reason     Age   From               Message\n  ----    ------     ----  ----               -------\n  Normal  Scheduled  4s    default-scheduler  Successfully assigned 
kubectl-3640/agnhost-primary-jfmm8 to ip-172-20-50-202.sa-east-1.compute.internal\n  Normal  Pulled     3s    kubelet            Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.32\" already present on machine\n  Normal  Created    3s    kubelet            Created container agnhost-primary\n  Normal  Started    3s    kubelet            Started container agnhost-primary\n"
Sep 14 19:25:27.300: INFO: Running '/tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-3640 describe rc agnhost-primary'
Sep 14 19:25:28.288: INFO: stderr: ""
Sep 14 19:25:28.288: INFO: stdout: "Name:         agnhost-primary\nNamespace:    kubectl-3640\nSelector:     app=agnhost,role=primary\nLabels:       app=agnhost\n              role=primary\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=primary\n  Containers:\n   agnhost-primary:\n    Image:        k8s.gcr.io/e2e-test-images/agnhost:2.32\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  5s    replication-controller  Created pod: agnhost-primary-jfmm8\n"
Sep 14 19:25:28.288: INFO: Running '/tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-3640 describe service agnhost-primary'
Sep 14 19:25:29.245: INFO: stderr: ""
Sep 14 19:25:29.245: INFO: stdout: "Name:              agnhost-primary\nNamespace:         kubectl-3640\nLabels:            app=agnhost\n                   role=primary\nAnnotations:       <none>\nSelector:          app=agnhost,role=primary\nType:              ClusterIP\nIP Family Policy:  SingleStack\nIP Families:       IPv4\nIP:                100.68.13.182\nIPs:               100.68.13.182\nPort:              <unset>  6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         100.96.2.35:6379\nSession Affinity:  None\nEvents:            <none>\n"
Sep 14 19:25:29.389: INFO: Running '/tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-3640 describe node ip-172-20-38-237.sa-east-1.compute.internal'
Sep 14 19:25:30.646: INFO: stderr: ""
Sep 14 19:25:30.646: INFO: stdout: "Name:               ip-172-20-38-237.sa-east-1.compute.internal\nRoles:              control-plane,master\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/instance-type=c5.large\n                    beta.kubernetes.io/os=linux\n                    failure-domain.beta.kubernetes.io/region=sa-east-1\n                    failure-domain.beta.kubernetes.io/zone=sa-east-1a\n                    kops.k8s.io/instancegroup=master-sa-east-1a\n                    kops.k8s.io/kops-controller-pki=\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=ip-172-20-38-237.sa-east-1.compute.internal\n                    kubernetes.io/os=linux\n                    kubernetes.io/role=master\n                    node-role.kubernetes.io/control-plane=\n                    node-role.kubernetes.io/master=\n                    node.kubernetes.io/exclude-from-external-load-balancers=\n                    node.kubernetes.io/instance-type=c5.large\n                    topology.kubernetes.io/region=sa-east-1\n                    topology.kubernetes.io/zone=sa-east-1a\nAnnotations:        node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Tue, 14 Sep 2021 19:19:21 +0000\nTaints:             node-role.kubernetes.io/master:NoSchedule\nUnschedulable:      false\nLease:\n  HolderIdentity:  ip-172-20-38-237.sa-east-1.compute.internal\n  AcquireTime:     <unset>\n  RenewTime:       Tue, 14 Sep 2021 19:25:28 +0000\nConditions:\n  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----             ------  -----------------                 ------------------                ------                       -------\n  MemoryPressure   False   Tue, 14 Sep 2021 19:25:13 +0000   Tue, 14 Sep 2021 19:19:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure     False   Tue, 14 Sep 2021 19:25:13 +0000   Tue, 14 Sep 2021 19:19:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure      False   Tue, 14 Sep 2021 19:25:13 +0000   Tue, 14 Sep 2021 19:19:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready            True    Tue, 14 Sep 2021 19:25:13 +0000   Tue, 14 Sep 2021 19:19:38 +0000   KubeletReady                 kubelet is posting ready status\nAddresses:\n  InternalIP:   172.20.38.237\n  ExternalIP:   52.67.190.173\n  Hostname:     ip-172-20-38-237.sa-east-1.compute.internal\n  InternalDNS:  ip-172-20-38-237.sa-east-1.compute.internal\n  ExternalDNS:  ec2-52-67-190-173.sa-east-1.compute.amazonaws.com\nCapacity:\n  attachable-volumes-aws-ebs:  25\n  cpu:                         2\n  ephemeral-storage:           46343520Ki\n  hugepages-1Gi:               0\n  hugepages-2Mi:               0\n  memory:                      3781940Ki\n  pods:                        110\nAllocatable:\n  attachable-volumes-aws-ebs:  25\n  cpu:                         2\n  ephemeral-storage:           42710187962\n  hugepages-1Gi:               0\n  hugepages-2Mi:               0\n  memory:                      3679540Ki\n  pods:                        110\nSystem Info:\n  Machine ID:                 ec2479a1c3270f264028a6995d8b2bc1\n  System UUID:                ec2479a1-c327-0f26-4028-a6995d8b2bc1\n  Boot ID:                    273f82a2-771e-4078-a92c-90735aa7a38d\n  Kernel Version:             5.10.61-flatcar\n  OS Image:                   Flatcar Container Linux by Kinvolk 2905.2.3 (Oklo)\n  Operating System:           linux\n  Architecture:               amd64\n  Container Runtime Version:  containerd://1.5.4\n  Kubelet Version:            v1.21.4\n  Kube-Proxy Version:         v1.21.4\nPodCIDR:                      100.96.0.0/24\nPodCIDRs:                     100.96.0.0/24\nProviderID:                   aws:///sa-east-1a/i-02430e901bb78d60b\nNon-terminated Pods:          (9 in total)\n  Namespace                   Name                                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age\n  ---------                   ----                                                                   ------------  ----------  ---------------  -------------  ---\n  kube-system                 dns-controller-56b8dc9b5b-6ffl4                                        50m (2%)      0 (0%)      50Mi (1%)        0 (0%)         5m50s\n  kube-system                 etcd-manager-events-ip-172-20-38-237.sa-east-1.compute.internal        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m5s\n  kube-system                 etcd-manager-main-ip-172-20-38-237.sa-east-1.compute.internal          200m (10%)    0 (0%)      100Mi (2%)       0 (0%)         5m23s\n  kube-system                 kopeio-networking-agent-7jdt6                                          50m (2%)      0 (0%)      100Mi (2%)       100Mi (2%)     5m50s\n  kube-system                 kops-controller-xmsk4                                                  50m (2%)      0 (0%)      50Mi (1%)        0 (0%)         5m50s\n  kube-system                 kube-apiserver-ip-172-20-38-237.sa-east-1.compute.internal             150m (7%)     0 (0%)      0 (0%)           0 (0%)         5m2s\n  kube-system                 kube-controller-manager-ip-172-20-38-237.sa-east-1.compute.internal    100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m26s\n  kube-system                 kube-proxy-ip-172-20-38-237.sa-east-1.compute.internal                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m20s\n  kube-system                 kube-scheduler-ip-172-20-38-237.sa-east-1.compute.internal             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m25s\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource                    Requests     Limits\n  --------                    --------     ------\n  cpu                         900m (45%)   0 (0%)\n  memory                      400Mi (11%)  100Mi (2%)\n  ephemeral-storage           0 (0%)       0 (0%)\n  hugepages-1Gi               0 (0%)       0 (0%)\n  hugepages-2Mi               0 (0%)       0 (0%)\n  attachable-volumes-aws-ebs  0            0\nEvents:\n  Type     Reason                   Age                    From        Message\n  ----     ------                   ----                   ----        -------\n  Normal   Starting                 6m54s                  kubelet     Starting kubelet.\n  Warning  InvalidDiskCapacity      6m54s                  kubelet     invalid capacity 0 on image filesystem\n  Normal   NodeAllocatableEnforced  6m54s                  kubelet     Updated Node Allocatable limit across pods\n  Normal   NodeHasSufficientMemory  6m53s (x8 over 6m54s)  kubelet     Node ip-172-20-38-237.sa-east-1.compute.internal status is now: NodeHasSufficientMemory\n  Normal   NodeHasNoDiskPressure    6m53s (x7 over 6m54s)  kubelet     Node ip-172-20-38-237.sa-east-1.compute.internal status is now: NodeHasNoDiskPressure\n  Normal   NodeHasSufficientPID     6m53s (x7 over 6m54s)  kubelet     Node ip-172-20-38-237.sa-east-1.compute.internal status is now: NodeHasSufficientPID\n  Normal   Starting                 6m1s                   kube-proxy  Starting kube-proxy.\n"
... skipping 11 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl describe
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1084
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":-1,"completed":4,"skipped":38,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:25:31.984: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 19 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name projected-secret-test-map-dc06d103-859a-42e3-a26f-343369b95a88
STEP: Creating a pod to test consume secrets
Sep 14 19:25:26.707: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-cfb45cb8-01b7-475b-aa6e-73af7bbb1c56" in namespace "projected-8140" to be "Succeeded or Failed"
Sep 14 19:25:26.851: INFO: Pod "pod-projected-secrets-cfb45cb8-01b7-475b-aa6e-73af7bbb1c56": Phase="Pending", Reason="", readiness=false. Elapsed: 144.118575ms
Sep 14 19:25:28.994: INFO: Pod "pod-projected-secrets-cfb45cb8-01b7-475b-aa6e-73af7bbb1c56": Phase="Pending", Reason="", readiness=false. Elapsed: 2.287587434s
Sep 14 19:25:31.138: INFO: Pod "pod-projected-secrets-cfb45cb8-01b7-475b-aa6e-73af7bbb1c56": Phase="Pending", Reason="", readiness=false. Elapsed: 4.431118246s
Sep 14 19:25:33.281: INFO: Pod "pod-projected-secrets-cfb45cb8-01b7-475b-aa6e-73af7bbb1c56": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.574323036s
STEP: Saw pod success
Sep 14 19:25:33.281: INFO: Pod "pod-projected-secrets-cfb45cb8-01b7-475b-aa6e-73af7bbb1c56" satisfied condition "Succeeded or Failed"
Sep 14 19:25:33.424: INFO: Trying to get logs from node ip-172-20-50-202.sa-east-1.compute.internal pod pod-projected-secrets-cfb45cb8-01b7-475b-aa6e-73af7bbb1c56 container projected-secret-volume-test: <nil>
STEP: delete the pod
Sep 14 19:25:33.719: INFO: Waiting for pod pod-projected-secrets-cfb45cb8-01b7-475b-aa6e-73af7bbb1c56 to disappear
Sep 14 19:25:33.862: INFO: Pod pod-projected-secrets-cfb45cb8-01b7-475b-aa6e-73af7bbb1c56 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:8.450 seconds]
[sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":8,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50
[It] new files should be created with FSGroup ownership when container is non-root
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:59
STEP: Creating a pod to test emptydir 0644 on tmpfs
Sep 14 19:25:31.203: INFO: Waiting up to 5m0s for pod "pod-a5d8b5ca-4ac9-4a6c-adfc-9b0e05959a2d" in namespace "emptydir-8084" to be "Succeeded or Failed"
Sep 14 19:25:31.352: INFO: Pod "pod-a5d8b5ca-4ac9-4a6c-adfc-9b0e05959a2d": Phase="Pending", Reason="", readiness=false. Elapsed: 148.481323ms
Sep 14 19:25:33.499: INFO: Pod "pod-a5d8b5ca-4ac9-4a6c-adfc-9b0e05959a2d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.295290362s
STEP: Saw pod success
Sep 14 19:25:33.499: INFO: Pod "pod-a5d8b5ca-4ac9-4a6c-adfc-9b0e05959a2d" satisfied condition "Succeeded or Failed"
Sep 14 19:25:33.642: INFO: Trying to get logs from node ip-172-20-41-171.sa-east-1.compute.internal pod pod-a5d8b5ca-4ac9-4a6c-adfc-9b0e05959a2d container test-container: <nil>
STEP: delete the pod
Sep 14 19:25:33.935: INFO: Waiting for pod pod-a5d8b5ca-4ac9-4a6c-adfc-9b0e05959a2d to disappear
Sep 14 19:25:34.078: INFO: Pod pod-a5d8b5ca-4ac9-4a6c-adfc-9b0e05959a2d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 32 lines ...
Sep 14 19:25:23.308: INFO: PersistentVolumeClaim pvc-9rhgq found but phase is Pending instead of Bound.
Sep 14 19:25:25.451: INFO: PersistentVolumeClaim pvc-9rhgq found and phase=Bound (15.150834392s)
Sep 14 19:25:25.451: INFO: Waiting up to 3m0s for PersistentVolume local-rdp6q to have phase Bound
Sep 14 19:25:25.595: INFO: PersistentVolume local-rdp6q found and phase=Bound (143.137863ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-c6m9
STEP: Creating a pod to test subpath
Sep 14 19:25:26.026: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-c6m9" in namespace "provisioning-7589" to be "Succeeded or Failed"
Sep 14 19:25:26.170: INFO: Pod "pod-subpath-test-preprovisionedpv-c6m9": Phase="Pending", Reason="", readiness=false. Elapsed: 143.66753ms
Sep 14 19:25:28.314: INFO: Pod "pod-subpath-test-preprovisionedpv-c6m9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.288299553s
Sep 14 19:25:30.459: INFO: Pod "pod-subpath-test-preprovisionedpv-c6m9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.432981219s
Sep 14 19:25:32.603: INFO: Pod "pod-subpath-test-preprovisionedpv-c6m9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.576998006s
STEP: Saw pod success
Sep 14 19:25:32.603: INFO: Pod "pod-subpath-test-preprovisionedpv-c6m9" satisfied condition "Succeeded or Failed"
Sep 14 19:25:32.746: INFO: Trying to get logs from node ip-172-20-50-202.sa-east-1.compute.internal pod pod-subpath-test-preprovisionedpv-c6m9 container test-container-subpath-preprovisionedpv-c6m9: <nil>
STEP: delete the pod
Sep 14 19:25:33.042: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-c6m9 to disappear
Sep 14 19:25:33.185: INFO: Pod pod-subpath-test-preprovisionedpv-c6m9 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-c6m9
Sep 14 19:25:33.185: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-c6m9" in namespace "provisioning-7589"
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":4,"skipped":24,"failed":0}
[BeforeEach] [sig-node] NodeLease
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 14 19:25:37.116: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename node-lease-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 5 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 14 19:25:38.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "node-lease-test-1154" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] NodeLease when the NodeLease feature is enabled should have OwnerReferences set","total":-1,"completed":5,"skipped":24,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:25:38.436: INFO: Only supported for providers [azure] (not aws)
... skipping 67 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl apply
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:793
    apply set/view last-applied
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:828
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl apply apply set/view last-applied","total":-1,"completed":6,"skipped":10,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:25:40.711: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 68 lines ...
• [SLOW TEST:98.157 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should remove from active list jobs that have been deleted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:244
------------------------------
{"msg":"PASSED [sig-apps] CronJob should remove from active list jobs that have been deleted","total":-1,"completed":1,"skipped":19,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:25:41.297: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 78 lines ...
• [SLOW TEST:11.590 seconds]
[sig-node] Docker Containers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":31,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 14 19:25:30.477: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 14 19:25:41.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-3206" for this suite.


• [SLOW TEST:11.292 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":-1,"completed":8,"skipped":44,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:25:41.799: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 159 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 14 19:25:42.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-4352" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":-1,"completed":7,"skipped":14,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:25:42.776: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 68 lines ...
Sep 14 19:24:47.030: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-pwx22] to have phase Bound
Sep 14 19:24:47.175: INFO: PersistentVolumeClaim pvc-pwx22 found and phase=Bound (144.830901ms)
STEP: Deleting the previously created pod
Sep 14 19:25:09.895: INFO: Deleting pod "pvc-volume-tester-kt6c2" in namespace "csi-mock-volumes-3118"
Sep 14 19:25:10.041: INFO: Wait up to 5m0s for pod "pvc-volume-tester-kt6c2" to be fully deleted
STEP: Checking CSI driver logs
Sep 14 19:25:16.474: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/09edfe49-8bef-45dc-917f-693e5677d886/volumes/kubernetes.io~csi/pvc-94d62391-e283-48cd-a175-d3240f248aa3/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-kt6c2
Sep 14 19:25:16.474: INFO: Deleting pod "pvc-volume-tester-kt6c2" in namespace "csi-mock-volumes-3118"
STEP: Deleting claim pvc-pwx22
Sep 14 19:25:16.908: INFO: Waiting up to 2m0s for PersistentVolume pvc-94d62391-e283-48cd-a175-d3240f248aa3 to get deleted
Sep 14 19:25:17.052: INFO: PersistentVolume pvc-94d62391-e283-48cd-a175-d3240f248aa3 found and phase=Released (143.070624ms)
Sep 14 19:25:19.197: INFO: PersistentVolume pvc-94d62391-e283-48cd-a175-d3240f248aa3 was removed
... skipping 47 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:443
    should not be passed when podInfoOnMount=false
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=false","total":-1,"completed":3,"skipped":31,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:25:42.798: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 55 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 14 19:25:42.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "health-7180" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] health handlers should contain necessary checks","total":-1,"completed":8,"skipped":34,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:25:43.303: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 48 lines ...
• [SLOW TEST:100.316 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":7,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:25:43.418: INFO: Only supported for providers [gce gke] (not aws)
... skipping 60 lines ...
• [SLOW TEST:49.933 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should support orphan deletion of custom resources
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/garbage_collector.go:1055
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should support orphan deletion of custom resources","total":-1,"completed":3,"skipped":23,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:25:44.338: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 62 lines ...
• [SLOW TEST:13.043 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":-1,"completed":5,"skipped":39,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:25:45.077: INFO: Only supported for providers [gce gke] (not aws)
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: gcepd]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1301
------------------------------
... skipping 37 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 14 19:25:45.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "metrics-grabber-2855" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from a ControllerManager.","total":-1,"completed":4,"skipped":29,"failed":0}

SSSSSSSSSSSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":2,"skipped":35,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 14 19:25:22.323: INFO: >>> kubeConfig: /root/.kube/config
... skipping 17 lines ...
Sep 14 19:25:37.814: INFO: PersistentVolumeClaim pvc-6nqfp found but phase is Pending instead of Bound.
Sep 14 19:25:39.958: INFO: PersistentVolumeClaim pvc-6nqfp found and phase=Bound (13.021091099s)
Sep 14 19:25:39.958: INFO: Waiting up to 3m0s for PersistentVolume local-bhjx8 to have phase Bound
Sep 14 19:25:40.101: INFO: PersistentVolume local-bhjx8 found and phase=Bound (143.459941ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-rqnp
STEP: Creating a pod to test subpath
Sep 14 19:25:40.533: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-rqnp" in namespace "provisioning-1181" to be "Succeeded or Failed"
Sep 14 19:25:40.682: INFO: Pod "pod-subpath-test-preprovisionedpv-rqnp": Phase="Pending", Reason="", readiness=false. Elapsed: 148.320754ms
Sep 14 19:25:42.826: INFO: Pod "pod-subpath-test-preprovisionedpv-rqnp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.292833434s
Sep 14 19:25:44.972: INFO: Pod "pod-subpath-test-preprovisionedpv-rqnp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.438570576s
STEP: Saw pod success
Sep 14 19:25:44.972: INFO: Pod "pod-subpath-test-preprovisionedpv-rqnp" satisfied condition "Succeeded or Failed"
Sep 14 19:25:45.116: INFO: Trying to get logs from node ip-172-20-48-93.sa-east-1.compute.internal pod pod-subpath-test-preprovisionedpv-rqnp container test-container-subpath-preprovisionedpv-rqnp: <nil>
STEP: delete the pod
Sep 14 19:25:45.412: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-rqnp to disappear
Sep 14 19:25:45.556: INFO: Pod pod-subpath-test-preprovisionedpv-rqnp no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-rqnp
Sep 14 19:25:45.556: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-rqnp" in namespace "provisioning-1181"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":3,"skipped":35,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
... skipping 20 lines ...
Sep 14 19:25:38.576: INFO: PersistentVolumeClaim pvc-kmsz4 found but phase is Pending instead of Bound.
Sep 14 19:25:40.720: INFO: PersistentVolumeClaim pvc-kmsz4 found and phase=Bound (6.618610942s)
Sep 14 19:25:40.720: INFO: Waiting up to 3m0s for PersistentVolume local-fqgbq to have phase Bound
Sep 14 19:25:40.862: INFO: PersistentVolume local-fqgbq found and phase=Bound (142.516887ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-trff
STEP: Creating a pod to test exec-volume-test
Sep 14 19:25:41.292: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-trff" in namespace "volume-6741" to be "Succeeded or Failed"
Sep 14 19:25:41.435: INFO: Pod "exec-volume-test-preprovisionedpv-trff": Phase="Pending", Reason="", readiness=false. Elapsed: 142.812135ms
Sep 14 19:25:43.578: INFO: Pod "exec-volume-test-preprovisionedpv-trff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.285842516s
STEP: Saw pod success
Sep 14 19:25:43.578: INFO: Pod "exec-volume-test-preprovisionedpv-trff" satisfied condition "Succeeded or Failed"
Sep 14 19:25:43.721: INFO: Trying to get logs from node ip-172-20-48-93.sa-east-1.compute.internal pod exec-volume-test-preprovisionedpv-trff container exec-container-preprovisionedpv-trff: <nil>
STEP: delete the pod
Sep 14 19:25:44.019: INFO: Waiting for pod exec-volume-test-preprovisionedpv-trff to disappear
Sep 14 19:25:44.169: INFO: Pod exec-volume-test-preprovisionedpv-trff no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-trff
Sep 14 19:25:44.169: INFO: Deleting pod "exec-volume-test-preprovisionedpv-trff" in namespace "volume-6741"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (ext4)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume","total":-1,"completed":3,"skipped":18,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:25:48.089: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 34 lines ...
      Only supported for providers [vsphere] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1437
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is non-root","total":-1,"completed":3,"skipped":29,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 14 19:25:34.386: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 29 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl client-side validation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:982
    should create/apply a valid CR for CRD with validation schema
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1001
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl client-side validation should create/apply a valid CR for CRD with validation schema","total":-1,"completed":4,"skipped":29,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:25:50.556: INFO: Only supported for providers [openstack] (not aws)
... skipping 128 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:452
    that expects a client request
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:453
      should support a client that connects, sends NO DATA, and disconnects
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:454
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects a client request should support a client that connects, sends NO DATA, and disconnects","total":-1,"completed":8,"skipped":70,"failed":0}

SSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volumes should store data","total":-1,"completed":4,"skipped":30,"failed":0}
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 14 19:25:42.751: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0644 on tmpfs
Sep 14 19:25:43.623: INFO: Waiting up to 5m0s for pod "pod-fb121729-9944-4456-9036-53c26928eed6" in namespace "emptydir-3013" to be "Succeeded or Failed"
Sep 14 19:25:43.767: INFO: Pod "pod-fb121729-9944-4456-9036-53c26928eed6": Phase="Pending", Reason="", readiness=false. Elapsed: 143.504786ms
Sep 14 19:25:45.911: INFO: Pod "pod-fb121729-9944-4456-9036-53c26928eed6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.28736161s
Sep 14 19:25:48.056: INFO: Pod "pod-fb121729-9944-4456-9036-53c26928eed6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.432699721s
Sep 14 19:25:50.200: INFO: Pod "pod-fb121729-9944-4456-9036-53c26928eed6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.577242612s
STEP: Saw pod success
Sep 14 19:25:50.200: INFO: Pod "pod-fb121729-9944-4456-9036-53c26928eed6" satisfied condition "Succeeded or Failed"
Sep 14 19:25:50.344: INFO: Trying to get logs from node ip-172-20-50-202.sa-east-1.compute.internal pod pod-fb121729-9944-4456-9036-53c26928eed6 container test-container: <nil>
STEP: delete the pod
Sep 14 19:25:50.636: INFO: Waiting for pod pod-fb121729-9944-4456-9036-53c26928eed6 to disappear
Sep 14 19:25:50.780: INFO: Pod pod-fb121729-9944-4456-9036-53c26928eed6 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:8.334 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":30,"failed":0}

SSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:25:51.145: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 114 lines ...
Sep 14 19:24:38.309: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-4tstd] to have phase Bound
Sep 14 19:24:38.453: INFO: PersistentVolumeClaim pvc-4tstd found and phase=Bound (143.611003ms)
STEP: Deleting the previously created pod
Sep 14 19:24:59.175: INFO: Deleting pod "pvc-volume-tester-jg7vm" in namespace "csi-mock-volumes-1478"
Sep 14 19:24:59.320: INFO: Wait up to 5m0s for pod "pvc-volume-tester-jg7vm" to be fully deleted
STEP: Checking CSI driver logs
Sep 14 19:25:03.963: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/5f59b9d0-a68c-4527-9467-992e8c3b864a/volumes/kubernetes.io~csi/pvc-747d61e5-82bf-4c78-9d66-7526b0c31496/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-jg7vm
Sep 14 19:25:03.963: INFO: Deleting pod "pvc-volume-tester-jg7vm" in namespace "csi-mock-volumes-1478"
STEP: Deleting claim pvc-4tstd
Sep 14 19:25:04.395: INFO: Waiting up to 2m0s for PersistentVolume pvc-747d61e5-82bf-4c78-9d66-7526b0c31496 to get deleted
Sep 14 19:25:04.539: INFO: PersistentVolume pvc-747d61e5-82bf-4c78-9d66-7526b0c31496 found and phase=Released (143.53435ms)
Sep 14 19:25:06.683: INFO: PersistentVolume pvc-747d61e5-82bf-4c78-9d66-7526b0c31496 found and phase=Released (2.287341495s)
... skipping 58 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 14 19:25:48.980: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0c49cb71-8bbb-4ff5-ae66-0eef6f4f394a" in namespace "projected-5203" to be "Succeeded or Failed"
Sep 14 19:25:49.124: INFO: Pod "downwardapi-volume-0c49cb71-8bbb-4ff5-ae66-0eef6f4f394a": Phase="Pending", Reason="", readiness=false. Elapsed: 144.093294ms
Sep 14 19:25:51.288: INFO: Pod "downwardapi-volume-0c49cb71-8bbb-4ff5-ae66-0eef6f4f394a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.307529039s
Sep 14 19:25:53.436: INFO: Pod "downwardapi-volume-0c49cb71-8bbb-4ff5-ae66-0eef6f4f394a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.455105903s
STEP: Saw pod success
Sep 14 19:25:53.436: INFO: Pod "downwardapi-volume-0c49cb71-8bbb-4ff5-ae66-0eef6f4f394a" satisfied condition "Succeeded or Failed"
Sep 14 19:25:53.578: INFO: Trying to get logs from node ip-172-20-41-171.sa-east-1.compute.internal pod downwardapi-volume-0c49cb71-8bbb-4ff5-ae66-0eef6f4f394a container client-container: <nil>
STEP: delete the pod
Sep 14 19:25:53.876: INFO: Waiting for pod downwardapi-volume-0c49cb71-8bbb-4ff5-ae66-0eef6f4f394a to disappear
Sep 14 19:25:54.019: INFO: Pod downwardapi-volume-0c49cb71-8bbb-4ff5-ae66-0eef6f4f394a no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.189 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":22,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:25:54.317: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 32 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:208

      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1301
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet Replace and Patch tests [Conformance]","total":-1,"completed":2,"skipped":0,"failed":0}
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 14 19:24:42.087: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename csi-mock-volumes
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 42 lines ...
Sep 14 19:25:15.717: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-p2nbj] to have phase Bound
Sep 14 19:25:15.860: INFO: PersistentVolumeClaim pvc-p2nbj found and phase=Bound (143.344574ms)
STEP: Deleting the previously created pod
Sep 14 19:25:22.580: INFO: Deleting pod "pvc-volume-tester-2hjrj" in namespace "csi-mock-volumes-6288"
Sep 14 19:25:22.725: INFO: Wait up to 5m0s for pod "pvc-volume-tester-2hjrj" to be fully deleted
STEP: Checking CSI driver logs
Sep 14 19:25:37.173: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/0587bdc6-31c0-4c3b-bc56-235a768d3f16/volumes/kubernetes.io~csi/pvc-4b2ef820-2f64-40fe-99ba-5fe9efa277c0/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-2hjrj
Sep 14 19:25:37.173: INFO: Deleting pod "pvc-volume-tester-2hjrj" in namespace "csi-mock-volumes-6288"
STEP: Deleting claim pvc-p2nbj
Sep 14 19:25:37.604: INFO: Waiting up to 2m0s for PersistentVolume pvc-4b2ef820-2f64-40fe-99ba-5fe9efa277c0 to get deleted
Sep 14 19:25:37.747: INFO: PersistentVolume pvc-4b2ef820-2f64-40fe-99ba-5fe9efa277c0 was removed
STEP: Deleting storageclass csi-mock-volumes-6288-scq6ph6
... skipping 43 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSIServiceAccountToken
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1374
    token should not be plumbed down when CSIDriver is not deployed
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1402
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSIServiceAccountToken token should not be plumbed down when CSIDriver is not deployed","total":-1,"completed":3,"skipped":0,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 9 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 14 19:25:55.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5126" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should update ConfigMap successfully","total":-1,"completed":5,"skipped":26,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:25:55.802: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 40 lines ...
• [SLOW TEST:81.653 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":27,"failed":0}

SSSSSSSS
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when CSIDriver does not exist","total":-1,"completed":1,"skipped":14,"failed":0}
[BeforeEach] [sig-node] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 14 19:25:52.491: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 25 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 14 19:25:55.913: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c9c469a8-fc14-481c-9f2b-86371448a3d3" in namespace "projected-6725" to be "Succeeded or Failed"
Sep 14 19:25:56.056: INFO: Pod "downwardapi-volume-c9c469a8-fc14-481c-9f2b-86371448a3d3": Phase="Pending", Reason="", readiness=false. Elapsed: 142.947385ms
Sep 14 19:25:58.200: INFO: Pod "downwardapi-volume-c9c469a8-fc14-481c-9f2b-86371448a3d3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.287102276s
Sep 14 19:26:00.344: INFO: Pod "downwardapi-volume-c9c469a8-fc14-481c-9f2b-86371448a3d3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.43082997s
STEP: Saw pod success
Sep 14 19:26:00.344: INFO: Pod "downwardapi-volume-c9c469a8-fc14-481c-9f2b-86371448a3d3" satisfied condition "Succeeded or Failed"
Sep 14 19:26:00.489: INFO: Trying to get logs from node ip-172-20-41-171.sa-east-1.compute.internal pod downwardapi-volume-c9c469a8-fc14-481c-9f2b-86371448a3d3 container client-container: <nil>
STEP: delete the pod
Sep 14 19:26:00.784: INFO: Waiting for pod downwardapi-volume-c9c469a8-fc14-481c-9f2b-86371448a3d3 to disappear
Sep 14 19:26:00.927: INFO: Pod downwardapi-volume-c9c469a8-fc14-481c-9f2b-86371448a3d3 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.169 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":1,"failed":0}

SSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:26:01.279: INFO: Only supported for providers [openstack] (not aws)
... skipping 43 lines ...
Sep 14 19:25:53.904: INFO: PersistentVolumeClaim pvc-bcrs7 found but phase is Pending instead of Bound.
Sep 14 19:25:56.048: INFO: PersistentVolumeClaim pvc-bcrs7 found and phase=Bound (13.006941526s)
Sep 14 19:25:56.048: INFO: Waiting up to 3m0s for PersistentVolume local-pn74r to have phase Bound
Sep 14 19:25:56.192: INFO: PersistentVolume local-pn74r found and phase=Bound (144.12077ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-2wrv
STEP: Creating a pod to test exec-volume-test
Sep 14 19:25:56.622: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-2wrv" in namespace "volume-9010" to be "Succeeded or Failed"
Sep 14 19:25:56.766: INFO: Pod "exec-volume-test-preprovisionedpv-2wrv": Phase="Pending", Reason="", readiness=false. Elapsed: 143.259539ms
Sep 14 19:25:58.910: INFO: Pod "exec-volume-test-preprovisionedpv-2wrv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.287187068s
STEP: Saw pod success
Sep 14 19:25:58.910: INFO: Pod "exec-volume-test-preprovisionedpv-2wrv" satisfied condition "Succeeded or Failed"
Sep 14 19:25:59.053: INFO: Trying to get logs from node ip-172-20-48-74.sa-east-1.compute.internal pod exec-volume-test-preprovisionedpv-2wrv container exec-container-preprovisionedpv-2wrv: <nil>
STEP: delete the pod
Sep 14 19:25:59.359: INFO: Waiting for pod exec-volume-test-preprovisionedpv-2wrv to disappear
Sep 14 19:25:59.505: INFO: Pod exec-volume-test-preprovisionedpv-2wrv no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-2wrv
Sep 14 19:25:59.505: INFO: Deleting pod "exec-volume-test-preprovisionedpv-2wrv" in namespace "volume-9010"
... skipping 17 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":6,"skipped":28,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:26:01.360: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
... skipping 38 lines ...
Sep 14 19:25:54.233: INFO: PersistentVolumeClaim pvc-ggwrb found but phase is Pending instead of Bound.
Sep 14 19:25:56.378: INFO: PersistentVolumeClaim pvc-ggwrb found and phase=Bound (2.289676749s)
Sep 14 19:25:56.378: INFO: Waiting up to 3m0s for PersistentVolume local-2mcjt to have phase Bound
Sep 14 19:25:56.526: INFO: PersistentVolume local-2mcjt found and phase=Bound (148.158767ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-sdxn
STEP: Creating a pod to test exec-volume-test
Sep 14 19:25:56.960: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-sdxn" in namespace "volume-8302" to be "Succeeded or Failed"
Sep 14 19:25:57.105: INFO: Pod "exec-volume-test-preprovisionedpv-sdxn": Phase="Pending", Reason="", readiness=false. Elapsed: 144.428534ms
Sep 14 19:25:59.250: INFO: Pod "exec-volume-test-preprovisionedpv-sdxn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.289935714s
Sep 14 19:26:01.398: INFO: Pod "exec-volume-test-preprovisionedpv-sdxn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.437622319s
STEP: Saw pod success
Sep 14 19:26:01.398: INFO: Pod "exec-volume-test-preprovisionedpv-sdxn" satisfied condition "Succeeded or Failed"
Sep 14 19:26:01.542: INFO: Trying to get logs from node ip-172-20-50-202.sa-east-1.compute.internal pod exec-volume-test-preprovisionedpv-sdxn container exec-container-preprovisionedpv-sdxn: <nil>
STEP: delete the pod
Sep 14 19:26:01.843: INFO: Waiting for pod exec-volume-test-preprovisionedpv-sdxn to disappear
Sep 14 19:26:01.987: INFO: Pod exec-volume-test-preprovisionedpv-sdxn no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-sdxn
Sep 14 19:26:01.987: INFO: Deleting pod "exec-volume-test-preprovisionedpv-sdxn" in namespace "volume-8302"
... skipping 17 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":2,"skipped":17,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:26:03.879: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230

      Only supported for node OS distro [gci ubuntu custom] (not debian)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:263
------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":14,"failed":0}
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 14 19:25:57.946: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0644 on tmpfs
Sep 14 19:25:58.813: INFO: Waiting up to 5m0s for pod "pod-9af7cb4b-28d2-417d-8f99-7020f0252442" in namespace "emptydir-1516" to be "Succeeded or Failed"
Sep 14 19:25:58.957: INFO: Pod "pod-9af7cb4b-28d2-417d-8f99-7020f0252442": Phase="Pending", Reason="", readiness=false. Elapsed: 143.725788ms
Sep 14 19:26:01.102: INFO: Pod "pod-9af7cb4b-28d2-417d-8f99-7020f0252442": Phase="Pending", Reason="", readiness=false. Elapsed: 2.288424144s
Sep 14 19:26:03.296: INFO: Pod "pod-9af7cb4b-28d2-417d-8f99-7020f0252442": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.482410044s
STEP: Saw pod success
Sep 14 19:26:03.296: INFO: Pod "pod-9af7cb4b-28d2-417d-8f99-7020f0252442" satisfied condition "Succeeded or Failed"
Sep 14 19:26:03.495: INFO: Trying to get logs from node ip-172-20-41-171.sa-east-1.compute.internal pod pod-9af7cb4b-28d2-417d-8f99-7020f0252442 container test-container: <nil>
STEP: delete the pod
Sep 14 19:26:03.833: INFO: Waiting for pod pod-9af7cb4b-28d2-417d-8f99-7020f0252442 to disappear
Sep 14 19:26:03.997: INFO: Pod pod-9af7cb4b-28d2-417d-8f99-7020f0252442 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.340 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":14,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:26:04.296: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 98 lines ...
Sep 14 19:25:55.817: INFO: Got stdout from 18.231.6.118:22: Hello from core@ip-172-20-50-202.sa-east-1.compute.internal
STEP: SSH'ing to 1 nodes and running echo "foo" | grep "bar"
STEP: SSH'ing to 1 nodes and running echo "stdout" && echo "stderr" >&2 && exit 7
Sep 14 19:25:59.157: INFO: Got stdout from 52.67.190.173:22: stdout
Sep 14 19:25:59.157: INFO: Got stderr from 52.67.190.173:22: stderr
STEP: SSH'ing to a nonexistent host
error dialing core@i.do.not.exist: 'dial tcp: address i.do.not.exist: missing port in address', retrying
[AfterEach] [sig-node] SSH
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 14 19:26:04.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ssh-481" for this suite.


• [SLOW TEST:18.493 seconds]
[sig-node] SSH
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should SSH to all nodes and run commands
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/ssh.go:45
------------------------------
{"msg":"PASSED [sig-node] SSH should SSH to all nodes and run commands","total":-1,"completed":5,"skipped":41,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:26:04.460: INFO: Driver local doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 48 lines ...
STEP: Destroying namespace "node-problem-detector-5832" for this suite.


S [SKIPPING] in Spec Setup (BeforeEach) [1.020 seconds]
[sig-node] NodeProblemDetector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should run without error [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:60

  Only supported for providers [gce gke] (not aws)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:55
------------------------------
... skipping 74 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 14 19:26:08.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-8596" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces should list and delete a collection of PodDisruptionBudgets [Conformance]","total":-1,"completed":6,"skipped":46,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:26:09.121: INFO: Only supported for providers [gce gke] (not aws)
... skipping 224 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-65e5de22-5073-4ce9-a475-1bc20fae9186
STEP: Creating a pod to test consume configMaps
Sep 14 19:26:02.336: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2fa274bc-7817-4312-b55f-630c9d57989a" in namespace "projected-7946" to be "Succeeded or Failed"
Sep 14 19:26:02.479: INFO: Pod "pod-projected-configmaps-2fa274bc-7817-4312-b55f-630c9d57989a": Phase="Pending", Reason="", readiness=false. Elapsed: 143.382334ms
Sep 14 19:26:04.625: INFO: Pod "pod-projected-configmaps-2fa274bc-7817-4312-b55f-630c9d57989a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.288781882s
Sep 14 19:26:06.768: INFO: Pod "pod-projected-configmaps-2fa274bc-7817-4312-b55f-630c9d57989a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.432146945s
Sep 14 19:26:08.913: INFO: Pod "pod-projected-configmaps-2fa274bc-7817-4312-b55f-630c9d57989a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.577682088s
STEP: Saw pod success
Sep 14 19:26:08.914: INFO: Pod "pod-projected-configmaps-2fa274bc-7817-4312-b55f-630c9d57989a" satisfied condition "Succeeded or Failed"
Sep 14 19:26:09.057: INFO: Trying to get logs from node ip-172-20-41-171.sa-east-1.compute.internal pod pod-projected-configmaps-2fa274bc-7817-4312-b55f-630c9d57989a container agnhost-container: <nil>
STEP: delete the pod
Sep 14 19:26:09.362: INFO: Waiting for pod pod-projected-configmaps-2fa274bc-7817-4312-b55f-630c9d57989a to disappear
Sep 14 19:26:09.529: INFO: Pod pod-projected-configmaps-2fa274bc-7817-4312-b55f-630c9d57989a no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:8.529 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":18,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:26:09.854: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 14 lines ...
      Only supported for node OS distro [gci ubuntu custom] (not debian)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:263
------------------------------
SSSS
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":-1,"completed":7,"skipped":32,"failed":0}
[BeforeEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 14 19:26:08.396: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:37
[It] should support r/w [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:65
STEP: Creating a pod to test hostPath r/w
Sep 14 19:26:09.259: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-3547" to be "Succeeded or Failed"
Sep 14 19:26:09.433: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 173.363822ms
Sep 14 19:26:11.577: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.317497103s
STEP: Saw pod success
Sep 14 19:26:11.577: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
Sep 14 19:26:11.720: INFO: Trying to get logs from node ip-172-20-41-171.sa-east-1.compute.internal pod pod-host-path-test container test-container-2: <nil>
STEP: delete the pod
Sep 14 19:26:12.013: INFO: Waiting for pod pod-host-path-test to disappear
Sep 14 19:26:12.156: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 14 19:26:12.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-3547" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] HostPath should support r/w [NodeConformance]","total":-1,"completed":8,"skipped":32,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:26:12.460: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 76 lines ...
      Driver local doesn't support InlineVolume -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":5,"skipped":47,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 14 19:26:09.635: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-8f13fef9-c8a8-426b-a66c-08475b3fd5c8
STEP: Creating a pod to test consume configMaps
Sep 14 19:26:10.662: INFO: Waiting up to 5m0s for pod "pod-configmaps-bfcc6e19-d4fc-46a9-8eed-8c39f4dd6839" in namespace "configmap-6456" to be "Succeeded or Failed"
Sep 14 19:26:10.805: INFO: Pod "pod-configmaps-bfcc6e19-d4fc-46a9-8eed-8c39f4dd6839": Phase="Pending", Reason="", readiness=false. Elapsed: 142.794572ms
Sep 14 19:26:12.949: INFO: Pod "pod-configmaps-bfcc6e19-d4fc-46a9-8eed-8c39f4dd6839": Phase="Running", Reason="", readiness=true. Elapsed: 2.286695979s
Sep 14 19:26:15.093: INFO: Pod "pod-configmaps-bfcc6e19-d4fc-46a9-8eed-8c39f4dd6839": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.430646462s
STEP: Saw pod success
Sep 14 19:26:15.093: INFO: Pod "pod-configmaps-bfcc6e19-d4fc-46a9-8eed-8c39f4dd6839" satisfied condition "Succeeded or Failed"
Sep 14 19:26:15.236: INFO: Trying to get logs from node ip-172-20-50-202.sa-east-1.compute.internal pod pod-configmaps-bfcc6e19-d4fc-46a9-8eed-8c39f4dd6839 container agnhost-container: <nil>
STEP: delete the pod
Sep 14 19:26:15.535: INFO: Waiting for pod pod-configmaps-bfcc6e19-d4fc-46a9-8eed-8c39f4dd6839 to disappear
Sep 14 19:26:15.679: INFO: Pod pod-configmaps-bfcc6e19-d4fc-46a9-8eed-8c39f4dd6839 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.333 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":3,"skipped":11,"failed":0}
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 14 19:26:09.847: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward api env vars
Sep 14 19:26:10.716: INFO: Waiting up to 5m0s for pod "downward-api-cadb0c4b-6070-4bbe-af29-593b439905f0" in namespace "downward-api-9298" to be "Succeeded or Failed"
Sep 14 19:26:10.859: INFO: Pod "downward-api-cadb0c4b-6070-4bbe-af29-593b439905f0": Phase="Pending", Reason="", readiness=false. Elapsed: 143.723097ms
Sep 14 19:26:13.004: INFO: Pod "downward-api-cadb0c4b-6070-4bbe-af29-593b439905f0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.288293083s
Sep 14 19:26:15.149: INFO: Pod "downward-api-cadb0c4b-6070-4bbe-af29-593b439905f0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.433351075s
STEP: Saw pod success
Sep 14 19:26:15.149: INFO: Pod "downward-api-cadb0c4b-6070-4bbe-af29-593b439905f0" satisfied condition "Succeeded or Failed"
Sep 14 19:26:15.293: INFO: Trying to get logs from node ip-172-20-50-202.sa-east-1.compute.internal pod downward-api-cadb0c4b-6070-4bbe-af29-593b439905f0 container dapi-container: <nil>
STEP: delete the pod
Sep 14 19:26:15.589: INFO: Waiting for pod downward-api-cadb0c4b-6070-4bbe-af29-593b439905f0 to disappear
Sep 14 19:26:15.734: INFO: Pod downward-api-cadb0c4b-6070-4bbe-af29-593b439905f0 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.177 seconds]
[sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":11,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:26:16.046: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
... skipping 23 lines ...
Sep 14 19:26:12.509: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test substitution in container's args
Sep 14 19:26:13.372: INFO: Waiting up to 5m0s for pod "var-expansion-80584e36-f37e-4bc6-af33-b559882530bc" in namespace "var-expansion-47" to be "Succeeded or Failed"
Sep 14 19:26:13.515: INFO: Pod "var-expansion-80584e36-f37e-4bc6-af33-b559882530bc": Phase="Pending", Reason="", readiness=false. Elapsed: 143.014725ms
Sep 14 19:26:15.659: INFO: Pod "var-expansion-80584e36-f37e-4bc6-af33-b559882530bc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.287431794s
STEP: Saw pod success
Sep 14 19:26:15.659: INFO: Pod "var-expansion-80584e36-f37e-4bc6-af33-b559882530bc" satisfied condition "Succeeded or Failed"
Sep 14 19:26:15.803: INFO: Trying to get logs from node ip-172-20-41-171.sa-east-1.compute.internal pod var-expansion-80584e36-f37e-4bc6-af33-b559882530bc container dapi-container: <nil>
STEP: delete the pod
Sep 14 19:26:16.096: INFO: Waiting for pod var-expansion-80584e36-f37e-4bc6-af33-b559882530bc to disappear
Sep 14 19:26:16.240: INFO: Pod var-expansion-80584e36-f37e-4bc6-af33-b559882530bc no longer exists
[AfterEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 40 lines ...
• [SLOW TEST:62.604 seconds]
[sig-api-machinery] Watchers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":-1,"completed":5,"skipped":26,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 20 lines ...
Sep 14 19:26:08.698: INFO: PersistentVolumeClaim pvc-fbbvl found but phase is Pending instead of Bound.
Sep 14 19:26:10.843: INFO: PersistentVolumeClaim pvc-fbbvl found and phase=Bound (13.011107752s)
Sep 14 19:26:10.843: INFO: Waiting up to 3m0s for PersistentVolume local-qd256 to have phase Bound
Sep 14 19:26:10.986: INFO: PersistentVolume local-qd256 found and phase=Bound (143.338931ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-9xb7
STEP: Creating a pod to test subpath
Sep 14 19:26:11.419: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-9xb7" in namespace "provisioning-2625" to be "Succeeded or Failed"
Sep 14 19:26:11.563: INFO: Pod "pod-subpath-test-preprovisionedpv-9xb7": Phase="Pending", Reason="", readiness=false. Elapsed: 143.394101ms
Sep 14 19:26:13.708: INFO: Pod "pod-subpath-test-preprovisionedpv-9xb7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.288379157s
Sep 14 19:26:15.852: INFO: Pod "pod-subpath-test-preprovisionedpv-9xb7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.432452917s
STEP: Saw pod success
Sep 14 19:26:15.852: INFO: Pod "pod-subpath-test-preprovisionedpv-9xb7" satisfied condition "Succeeded or Failed"
Sep 14 19:26:15.996: INFO: Trying to get logs from node ip-172-20-41-171.sa-east-1.compute.internal pod pod-subpath-test-preprovisionedpv-9xb7 container test-container-subpath-preprovisionedpv-9xb7: <nil>
STEP: delete the pod
Sep 14 19:26:16.298: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-9xb7 to disappear
Sep 14 19:26:16.442: INFO: Pod pod-subpath-test-preprovisionedpv-9xb7 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-9xb7
Sep 14 19:26:16.442: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-9xb7" in namespace "provisioning-2625"
... skipping 41 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 14 19:26:19.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3534" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":16,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 14 19:26:04.364: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should allow privilege escalation when true [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:367
Sep 14 19:26:05.280: INFO: Waiting up to 5m0s for pod "alpine-nnp-true-686322fc-b6b5-45a4-a101-20fb2ed7e5ea" in namespace "security-context-test-9618" to be "Succeeded or Failed"
Sep 14 19:26:05.425: INFO: Pod "alpine-nnp-true-686322fc-b6b5-45a4-a101-20fb2ed7e5ea": Phase="Pending", Reason="", readiness=false. Elapsed: 144.386471ms
Sep 14 19:26:07.587: INFO: Pod "alpine-nnp-true-686322fc-b6b5-45a4-a101-20fb2ed7e5ea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.307039781s
Sep 14 19:26:09.733: INFO: Pod "alpine-nnp-true-686322fc-b6b5-45a4-a101-20fb2ed7e5ea": Phase="Pending", Reason="", readiness=false. Elapsed: 4.452236141s
Sep 14 19:26:11.877: INFO: Pod "alpine-nnp-true-686322fc-b6b5-45a4-a101-20fb2ed7e5ea": Phase="Pending", Reason="", readiness=false. Elapsed: 6.596230655s
Sep 14 19:26:14.022: INFO: Pod "alpine-nnp-true-686322fc-b6b5-45a4-a101-20fb2ed7e5ea": Phase="Pending", Reason="", readiness=false. Elapsed: 8.741448237s
Sep 14 19:26:16.169: INFO: Pod "alpine-nnp-true-686322fc-b6b5-45a4-a101-20fb2ed7e5ea": Phase="Pending", Reason="", readiness=false. Elapsed: 10.888351433s
Sep 14 19:26:18.313: INFO: Pod "alpine-nnp-true-686322fc-b6b5-45a4-a101-20fb2ed7e5ea": Phase="Pending", Reason="", readiness=false. Elapsed: 13.032622289s
Sep 14 19:26:20.457: INFO: Pod "alpine-nnp-true-686322fc-b6b5-45a4-a101-20fb2ed7e5ea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.177090311s
Sep 14 19:26:20.458: INFO: Pod "alpine-nnp-true-686322fc-b6b5-45a4-a101-20fb2ed7e5ea" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 14 19:26:20.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-9618" for this suite.


... skipping 2 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when creating containers with AllowPrivilegeEscalation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:296
    should allow privilege escalation when true [LinuxOnly] [NodeConformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:367
------------------------------
{"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when true [LinuxOnly] [NodeConformance]","total":-1,"completed":4,"skipped":27,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:26:20.902: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 45 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64
[It] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod with the kernel.shm_rmid_forced sysctl
STEP: Watching for error events or started pod
STEP: Waiting for pod completion
STEP: Checking that the pod succeeded
STEP: Getting logs from the pod
STEP: Checking that the sysctl is actually updated
[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 14 19:26:23.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sysctl-3468" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":6,"skipped":17,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:26:23.916: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 25 lines ...
[AfterEach] [sig-api-machinery] client-go should negotiate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 14 19:26:24.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

•
------------------------------
{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/vnd.kubernetes.protobuf\"","total":-1,"completed":7,"skipped":19,"failed":0}

SSSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":9,"skipped":73,"failed":0}
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 14 19:26:18.519: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 12 lines ...
• [SLOW TEST:6.794 seconds]
[sig-node] InitContainer [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":-1,"completed":10,"skipped":73,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
... skipping 9 lines ...
Sep 14 19:25:45.833: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-360fjbz9
STEP: creating a claim
Sep 14 19:25:45.978: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-td9d
STEP: Creating a pod to test subpath
Sep 14 19:25:46.411: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-td9d" in namespace "provisioning-360" to be "Succeeded or Failed"
Sep 14 19:25:46.554: INFO: Pod "pod-subpath-test-dynamicpv-td9d": Phase="Pending", Reason="", readiness=false. Elapsed: 143.319443ms
Sep 14 19:25:48.699: INFO: Pod "pod-subpath-test-dynamicpv-td9d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.288043612s
Sep 14 19:25:50.843: INFO: Pod "pod-subpath-test-dynamicpv-td9d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.432499178s
Sep 14 19:25:52.988: INFO: Pod "pod-subpath-test-dynamicpv-td9d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.577062537s
Sep 14 19:25:55.135: INFO: Pod "pod-subpath-test-dynamicpv-td9d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.72433826s
Sep 14 19:25:57.280: INFO: Pod "pod-subpath-test-dynamicpv-td9d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.868957159s
Sep 14 19:25:59.435: INFO: Pod "pod-subpath-test-dynamicpv-td9d": Phase="Pending", Reason="", readiness=false. Elapsed: 13.02449761s
Sep 14 19:26:01.580: INFO: Pod "pod-subpath-test-dynamicpv-td9d": Phase="Pending", Reason="", readiness=false. Elapsed: 15.169881669s
Sep 14 19:26:03.770: INFO: Pod "pod-subpath-test-dynamicpv-td9d": Phase="Pending", Reason="", readiness=false. Elapsed: 17.359466282s
Sep 14 19:26:05.973: INFO: Pod "pod-subpath-test-dynamicpv-td9d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.562403949s
STEP: Saw pod success
Sep 14 19:26:05.973: INFO: Pod "pod-subpath-test-dynamicpv-td9d" satisfied condition "Succeeded or Failed"
Sep 14 19:26:06.168: INFO: Trying to get logs from node ip-172-20-48-93.sa-east-1.compute.internal pod pod-subpath-test-dynamicpv-td9d container test-container-volume-dynamicpv-td9d: <nil>
STEP: delete the pod
Sep 14 19:26:06.495: INFO: Waiting for pod pod-subpath-test-dynamicpv-td9d to disappear
Sep 14 19:26:06.638: INFO: Pod pod-subpath-test-dynamicpv-td9d no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-td9d
Sep 14 19:26:06.638: INFO: Deleting pod "pod-subpath-test-dynamicpv-td9d" in namespace "provisioning-360"
... skipping 21 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path","total":-1,"completed":6,"skipped":52,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":47,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 14 19:26:15.979: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 76 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:449
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":4,"skipped":37,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-windows] Hybrid cluster network
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/windows/framework.go:28
Sep 14 19:26:29.879: INFO: Only supported for node OS distro [windows] (not debian)
... skipping 12 lines ...
    Only supported for node OS distro [windows] (not debian)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/windows/framework.go:30
------------------------------
SSSS
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":39,"failed":0}
[BeforeEach] [sig-network] Service endpoints latency
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 14 19:26:16.539: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 483 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:449
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":6,"skipped":32,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 14 19:26:28.548: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward api env vars
Sep 14 19:26:29.431: INFO: Waiting up to 5m0s for pod "downward-api-daf978bb-b7be-4b62-a17b-e734731628cc" in namespace "downward-api-2851" to be "Succeeded or Failed"
Sep 14 19:26:29.574: INFO: Pod "downward-api-daf978bb-b7be-4b62-a17b-e734731628cc": Phase="Pending", Reason="", readiness=false. Elapsed: 142.936182ms
Sep 14 19:26:31.718: INFO: Pod "downward-api-daf978bb-b7be-4b62-a17b-e734731628cc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.286950919s
STEP: Saw pod success
Sep 14 19:26:31.718: INFO: Pod "downward-api-daf978bb-b7be-4b62-a17b-e734731628cc" satisfied condition "Succeeded or Failed"
Sep 14 19:26:31.861: INFO: Trying to get logs from node ip-172-20-50-202.sa-east-1.compute.internal pod downward-api-daf978bb-b7be-4b62-a17b-e734731628cc container dapi-container: <nil>
STEP: delete the pod
Sep 14 19:26:32.155: INFO: Waiting for pod downward-api-daf978bb-b7be-4b62-a17b-e734731628cc to disappear
Sep 14 19:26:32.298: INFO: Pod downward-api-daf978bb-b7be-4b62-a17b-e734731628cc no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 14 19:26:32.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2851" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":53,"failed":0}

SS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 8 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 14 19:26:32.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-1397" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should not run without a specified user ID","total":-1,"completed":5,"skipped":47,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:26:33.243: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
... skipping 43 lines ...
• [SLOW TEST:150.617 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should not emit unexpected warnings
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:221
------------------------------
{"msg":"PASSED [sig-apps] CronJob should not emit unexpected warnings","total":-1,"completed":1,"skipped":6,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 16 lines ...
Sep 14 19:26:23.844: INFO: PersistentVolumeClaim pvc-xvvbk found but phase is Pending instead of Bound.
Sep 14 19:26:25.992: INFO: PersistentVolumeClaim pvc-xvvbk found and phase=Bound (4.438000846s)
Sep 14 19:26:25.992: INFO: Waiting up to 3m0s for PersistentVolume local-w5hs8 to have phase Bound
Sep 14 19:26:26.137: INFO: PersistentVolume local-w5hs8 found and phase=Bound (145.192737ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-th5l
STEP: Creating a pod to test subpath
Sep 14 19:26:26.569: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-th5l" in namespace "provisioning-8059" to be "Succeeded or Failed"
Sep 14 19:26:26.713: INFO: Pod "pod-subpath-test-preprovisionedpv-th5l": Phase="Pending", Reason="", readiness=false. Elapsed: 144.554266ms
Sep 14 19:26:28.857: INFO: Pod "pod-subpath-test-preprovisionedpv-th5l": Phase="Pending", Reason="", readiness=false. Elapsed: 2.288409166s
Sep 14 19:26:31.001: INFO: Pod "pod-subpath-test-preprovisionedpv-th5l": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.431944143s
STEP: Saw pod success
Sep 14 19:26:31.001: INFO: Pod "pod-subpath-test-preprovisionedpv-th5l" satisfied condition "Succeeded or Failed"
Sep 14 19:26:31.144: INFO: Trying to get logs from node ip-172-20-48-93.sa-east-1.compute.internal pod pod-subpath-test-preprovisionedpv-th5l container test-container-subpath-preprovisionedpv-th5l: <nil>
STEP: delete the pod
Sep 14 19:26:31.439: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-th5l to disappear
Sep 14 19:26:31.582: INFO: Pod pod-subpath-test-preprovisionedpv-th5l no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-th5l
Sep 14 19:26:31.583: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-th5l" in namespace "provisioning-8059"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":6,"skipped":35,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:26:33.900: INFO: Only supported for providers [gce gke] (not aws)
... skipping 43 lines ...
Sep 14 19:26:22.838: INFO: PersistentVolumeClaim pvc-b9rjc found but phase is Pending instead of Bound.
Sep 14 19:26:24.982: INFO: PersistentVolumeClaim pvc-b9rjc found and phase=Bound (10.864784602s)
Sep 14 19:26:24.982: INFO: Waiting up to 3m0s for PersistentVolume local-grfd8 to have phase Bound
Sep 14 19:26:25.125: INFO: PersistentVolume local-grfd8 found and phase=Bound (143.248282ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-4ckp
STEP: Creating a pod to test subpath
Sep 14 19:26:25.558: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-4ckp" in namespace "provisioning-1672" to be "Succeeded or Failed"
Sep 14 19:26:25.701: INFO: Pod "pod-subpath-test-preprovisionedpv-4ckp": Phase="Pending", Reason="", readiness=false. Elapsed: 143.404671ms
Sep 14 19:26:27.845: INFO: Pod "pod-subpath-test-preprovisionedpv-4ckp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.287430611s
Sep 14 19:26:29.990: INFO: Pod "pod-subpath-test-preprovisionedpv-4ckp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.432151093s
Sep 14 19:26:32.134: INFO: Pod "pod-subpath-test-preprovisionedpv-4ckp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.576009805s
STEP: Saw pod success
Sep 14 19:26:32.134: INFO: Pod "pod-subpath-test-preprovisionedpv-4ckp" satisfied condition "Succeeded or Failed"
Sep 14 19:26:32.277: INFO: Trying to get logs from node ip-172-20-41-171.sa-east-1.compute.internal pod pod-subpath-test-preprovisionedpv-4ckp container test-container-subpath-preprovisionedpv-4ckp: <nil>
STEP: delete the pod
Sep 14 19:26:32.584: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-4ckp to disappear
Sep 14 19:26:32.728: INFO: Pod pod-subpath-test-preprovisionedpv-4ckp no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-4ckp
Sep 14 19:26:32.728: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-4ckp" in namespace "provisioning-1672"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:384
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":7,"skipped":54,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:26:35.739: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 52 lines ...
• [SLOW TEST:27.182 seconds]
[sig-storage] PVC Protection
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Verify that PVC in active use by a pod is not removed immediately
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:126
------------------------------
{"msg":"PASSED [sig-storage] PVC Protection Verify that PVC in active use by a pod is not removed immediately","total":-1,"completed":6,"skipped":24,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
... skipping 33 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:48
STEP: Creating a pod to test hostPath mode
Sep 14 19:26:33.471: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-4326" to be "Succeeded or Failed"
Sep 14 19:26:33.615: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 143.911023ms
Sep 14 19:26:35.788: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.316379693s
Sep 14 19:26:37.935: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.463848611s
STEP: Saw pod success
Sep 14 19:26:37.935: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
Sep 14 19:26:38.079: INFO: Trying to get logs from node ip-172-20-50-202.sa-east-1.compute.internal pod pod-host-path-test container test-container-1: <nil>
STEP: delete the pod
Sep 14 19:26:38.388: INFO: Waiting for pod pod-host-path-test to disappear
Sep 14 19:26:38.533: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.226 seconds]
[sig-storage] HostPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should give a volume the correct mode [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:48
------------------------------
{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","total":-1,"completed":8,"skipped":55,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:26:38.846: INFO: Driver hostPathSymlink doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 147 lines ...
• [SLOW TEST:9.104 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":-1,"completed":7,"skipped":41,"failed":0}

SS
------------------------------
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 13 lines ...
• [SLOW TEST:61.304 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":28,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:26:44.159: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 86 lines ...
• [SLOW TEST:12.832 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":43,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 5 lines ...
[It] should support existing directory
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
Sep 14 19:26:44.928: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Sep 14 19:26:45.072: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-h6f6
STEP: Creating a pod to test subpath
Sep 14 19:26:45.220: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-h6f6" in namespace "provisioning-6679" to be "Succeeded or Failed"
Sep 14 19:26:45.362: INFO: Pod "pod-subpath-test-inlinevolume-h6f6": Phase="Pending", Reason="", readiness=false. Elapsed: 142.880346ms
Sep 14 19:26:47.506: INFO: Pod "pod-subpath-test-inlinevolume-h6f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.286365277s
Sep 14 19:26:49.650: INFO: Pod "pod-subpath-test-inlinevolume-h6f6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.429965798s
Sep 14 19:26:51.794: INFO: Pod "pod-subpath-test-inlinevolume-h6f6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.57470626s
Sep 14 19:26:53.938: INFO: Pod "pod-subpath-test-inlinevolume-h6f6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.71862295s
Sep 14 19:26:56.096: INFO: Pod "pod-subpath-test-inlinevolume-h6f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.876784135s
STEP: Saw pod success
Sep 14 19:26:56.096: INFO: Pod "pod-subpath-test-inlinevolume-h6f6" satisfied condition "Succeeded or Failed"
Sep 14 19:26:56.240: INFO: Trying to get logs from node ip-172-20-41-171.sa-east-1.compute.internal pod pod-subpath-test-inlinevolume-h6f6 container test-container-volume-inlinevolume-h6f6: <nil>
STEP: delete the pod
Sep 14 19:26:56.533: INFO: Waiting for pod pod-subpath-test-inlinevolume-h6f6 to disappear
Sep 14 19:26:56.675: INFO: Pod pod-subpath-test-inlinevolume-h6f6 no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-h6f6
Sep 14 19:26:56.675: INFO: Deleting pod "pod-subpath-test-inlinevolume-h6f6" in namespace "provisioning-6679"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support existing directory","total":-1,"completed":9,"skipped":37,"failed":0}

SSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:26:57.322: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 62 lines ...
STEP: Listing all of the created validation webhooks
Sep 14 19:26:02.406: INFO: Waiting for webhook configuration to be ready...
Sep 14 19:26:12.810: INFO: Waiting for webhook configuration to be ready...
Sep 14 19:26:23.210: INFO: Waiting for webhook configuration to be ready...
Sep 14 19:26:33.607: INFO: Waiting for webhook configuration to be ready...
Sep 14 19:26:43.905: INFO: Waiting for webhook configuration to be ready...
Sep 14 19:26:43.905: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.errorString | 0xc0001c4250>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "webhook-7743".
STEP: Found 7 events.
Sep 14 19:26:44.049: INFO: At 2021-09-14 19:25:44 +0000 UTC - event for sample-webhook-deployment: {deployment-controller } ScalingReplicaSet: Scaled up replica set sample-webhook-deployment-78988fc6cd to 1
Sep 14 19:26:44.049: INFO: At 2021-09-14 19:25:44 +0000 UTC - event for sample-webhook-deployment-78988fc6cd: {replicaset-controller } SuccessfulCreate: Created pod: sample-webhook-deployment-78988fc6cd-67lc9
Sep 14 19:26:44.049: INFO: At 2021-09-14 19:25:44 +0000 UTC - event for sample-webhook-deployment-78988fc6cd-67lc9: {default-scheduler } Scheduled: Successfully assigned webhook-7743/sample-webhook-deployment-78988fc6cd-67lc9 to ip-172-20-48-74.sa-east-1.compute.internal
Sep 14 19:26:44.049: INFO: At 2021-09-14 19:25:45 +0000 UTC - event for sample-webhook-deployment-78988fc6cd-67lc9: {kubelet ip-172-20-48-74.sa-east-1.compute.internal} FailedMount: MountVolume.SetUp failed for volume "webhook-certs" : failed to sync secret cache: timed out waiting for the condition
Sep 14 19:26:44.049: INFO: At 2021-09-14 19:25:46 +0000 UTC - event for sample-webhook-deployment-78988fc6cd-67lc9: {kubelet ip-172-20-48-74.sa-east-1.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
Sep 14 19:26:44.049: INFO: At 2021-09-14 19:25:46 +0000 UTC - event for sample-webhook-deployment-78988fc6cd-67lc9: {kubelet ip-172-20-48-74.sa-east-1.compute.internal} Created: Created container sample-webhook
Sep 14 19:26:44.049: INFO: At 2021-09-14 19:25:46 +0000 UTC - event for sample-webhook-deployment-78988fc6cd-67lc9: {kubelet ip-172-20-48-74.sa-east-1.compute.internal} Started: Started container sample-webhook
Sep 14 19:26:44.192: INFO: POD                                         NODE                                        PHASE    GRACE  CONDITIONS
Sep 14 19:26:44.192: INFO: sample-webhook-deployment-78988fc6cd-67lc9  ip-172-20-48-74.sa-east-1.compute.internal  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-09-14 19:25:44 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-09-14 19:25:47 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-09-14 19:25:47 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-09-14 19:25:44 +0000 UTC  }]
Sep 14 19:26:44.192: INFO: 
... skipping 498 lines ...
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing validating webhooks should work [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Sep 14 19:26:43.905: waiting for webhook configuration to be ready
  Unexpected error:
      <*errors.errorString | 0xc0001c4250>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:606
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":-1,"completed":3,"skipped":35,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]"]}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:26:57.524: INFO: Driver local doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 11 lines ...
      Driver local doesn't support ext3 -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:121
------------------------------
S
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":-1,"completed":7,"skipped":47,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
[BeforeEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 14 19:26:29.512: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating pod pod-subpath-test-configmap-p84s
STEP: Creating a pod to test atomic-volume-subpath
Sep 14 19:26:30.663: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-p84s" in namespace "subpath-5754" to be "Succeeded or Failed"
Sep 14 19:26:30.806: INFO: Pod "pod-subpath-test-configmap-p84s": Phase="Pending", Reason="", readiness=false. Elapsed: 143.119533ms
Sep 14 19:26:32.950: INFO: Pod "pod-subpath-test-configmap-p84s": Phase="Running", Reason="", readiness=true. Elapsed: 2.286520351s
Sep 14 19:26:35.107: INFO: Pod "pod-subpath-test-configmap-p84s": Phase="Running", Reason="", readiness=true. Elapsed: 4.443391022s
Sep 14 19:26:37.261: INFO: Pod "pod-subpath-test-configmap-p84s": Phase="Running", Reason="", readiness=true. Elapsed: 6.597585197s
Sep 14 19:26:39.412: INFO: Pod "pod-subpath-test-configmap-p84s": Phase="Running", Reason="", readiness=true. Elapsed: 8.74898689s
Sep 14 19:26:41.556: INFO: Pod "pod-subpath-test-configmap-p84s": Phase="Running", Reason="", readiness=true. Elapsed: 10.892613404s
... skipping 2 lines ...
Sep 14 19:26:47.987: INFO: Pod "pod-subpath-test-configmap-p84s": Phase="Running", Reason="", readiness=true. Elapsed: 17.324315815s
Sep 14 19:26:50.132: INFO: Pod "pod-subpath-test-configmap-p84s": Phase="Running", Reason="", readiness=true. Elapsed: 19.46922057s
Sep 14 19:26:52.278: INFO: Pod "pod-subpath-test-configmap-p84s": Phase="Running", Reason="", readiness=true. Elapsed: 21.614400169s
Sep 14 19:26:54.421: INFO: Pod "pod-subpath-test-configmap-p84s": Phase="Running", Reason="", readiness=true. Elapsed: 23.75784669s
Sep 14 19:26:56.564: INFO: Pod "pod-subpath-test-configmap-p84s": Phase="Succeeded", Reason="", readiness=false. Elapsed: 25.901138436s
STEP: Saw pod success
Sep 14 19:26:56.564: INFO: Pod "pod-subpath-test-configmap-p84s" satisfied condition "Succeeded or Failed"
Sep 14 19:26:56.707: INFO: Trying to get logs from node ip-172-20-48-74.sa-east-1.compute.internal pod pod-subpath-test-configmap-p84s container test-container-subpath-configmap-p84s: <nil>
STEP: delete the pod
Sep 14 19:26:56.998: INFO: Waiting for pod pod-subpath-test-configmap-p84s to disappear
Sep 14 19:26:57.144: INFO: Pod pod-subpath-test-configmap-p84s no longer exists
STEP: Deleting pod pod-subpath-test-configmap-p84s
Sep 14 19:26:57.144: INFO: Deleting pod "pod-subpath-test-configmap-p84s" in namespace "subpath-5754"
... skipping 8 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":-1,"completed":8,"skipped":47,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:26:57.592: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 90 lines ...
Sep 14 19:26:50.906: INFO: Waiting for pod aws-client to disappear
Sep 14 19:26:51.050: INFO: Pod aws-client no longer exists
STEP: cleaning the environment after aws
STEP: Deleting pv and pvc
Sep 14 19:26:51.050: INFO: Deleting PersistentVolumeClaim "pvc-zt6w7"
Sep 14 19:26:51.194: INFO: Deleting PersistentVolume "aws-nnss2"
Sep 14 19:26:51.700: INFO: Couldn't delete PD "aws://sa-east-1a/vol-04238b54189ee4dfa", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-04238b54189ee4dfa is currently attached to i-05d276943891cd348
	status code: 400, request id: 73a95e8b-6ca2-40df-b053-53cb56d8ef2a
Sep 14 19:26:57.513: INFO: Successfully deleted PD "aws://sa-east-1a/vol-04238b54189ee4dfa".
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 14 19:26:57.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-4208" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (block volmode)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data","total":-1,"completed":6,"skipped":54,"failed":0}

SS
------------------------------
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 39 lines ...
• [SLOW TEST:20.554 seconds]
[sig-apps] DisruptionController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should block an eviction until the PDB is updated to allow it
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:318
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high  [Conformance]","total":-1,"completed":10,"skipped":39,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 14 19:26:30.090: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 62 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":11,"skipped":39,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:27:01.894: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 126 lines ...
      Driver local doesn't support InlineVolume -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
S
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should block an eviction until the PDB is updated to allow it","total":-1,"completed":7,"skipped":34,"failed":0}
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 14 19:26:58.707: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 14 19:26:59.570: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4b147922-61e9-415e-a5f6-c63e0fb0eb8e" in namespace "downward-api-2984" to be "Succeeded or Failed"
Sep 14 19:26:59.714: INFO: Pod "downwardapi-volume-4b147922-61e9-415e-a5f6-c63e0fb0eb8e": Phase="Pending", Reason="", readiness=false. Elapsed: 143.852796ms
Sep 14 19:27:01.858: INFO: Pod "downwardapi-volume-4b147922-61e9-415e-a5f6-c63e0fb0eb8e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.287974404s
STEP: Saw pod success
Sep 14 19:27:01.858: INFO: Pod "downwardapi-volume-4b147922-61e9-415e-a5f6-c63e0fb0eb8e" satisfied condition "Succeeded or Failed"
Sep 14 19:27:02.002: INFO: Trying to get logs from node ip-172-20-50-202.sa-east-1.compute.internal pod downwardapi-volume-4b147922-61e9-415e-a5f6-c63e0fb0eb8e container client-container: <nil>
STEP: delete the pod
Sep 14 19:27:02.297: INFO: Waiting for pod downwardapi-volume-4b147922-61e9-415e-a5f6-c63e0fb0eb8e to disappear
Sep 14 19:27:02.441: INFO: Pod downwardapi-volume-4b147922-61e9-415e-a5f6-c63e0fb0eb8e no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 176 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support multiple inline ephemeral volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:211
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support multiple inline ephemeral volumes","total":-1,"completed":2,"skipped":17,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 19 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when scheduling a read only busybox container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:188
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":37,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:27:05.285: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 139 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-map-da8449fa-acff-402a-83e3-b9debe3d2387
STEP: Creating a pod to test consume secrets
Sep 14 19:26:58.427: INFO: Waiting up to 5m0s for pod "pod-secrets-3a899c68-77b8-474e-b986-4e388f122ba1" in namespace "secrets-1352" to be "Succeeded or Failed"
Sep 14 19:26:58.570: INFO: Pod "pod-secrets-3a899c68-77b8-474e-b986-4e388f122ba1": Phase="Pending", Reason="", readiness=false. Elapsed: 142.902738ms
Sep 14 19:27:00.714: INFO: Pod "pod-secrets-3a899c68-77b8-474e-b986-4e388f122ba1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.286840198s
Sep 14 19:27:02.862: INFO: Pod "pod-secrets-3a899c68-77b8-474e-b986-4e388f122ba1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.435600658s
Sep 14 19:27:05.006: INFO: Pod "pod-secrets-3a899c68-77b8-474e-b986-4e388f122ba1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.579050135s
STEP: Saw pod success
Sep 14 19:27:05.006: INFO: Pod "pod-secrets-3a899c68-77b8-474e-b986-4e388f122ba1" satisfied condition "Succeeded or Failed"
Sep 14 19:27:05.149: INFO: Trying to get logs from node ip-172-20-48-74.sa-east-1.compute.internal pod pod-secrets-3a899c68-77b8-474e-b986-4e388f122ba1 container secret-volume-test: <nil>
STEP: delete the pod
Sep 14 19:27:05.442: INFO: Waiting for pod pod-secrets-3a899c68-77b8-474e-b986-4e388f122ba1 to disappear
Sep 14 19:27:05.588: INFO: Pod pod-secrets-3a899c68-77b8-474e-b986-4e388f122ba1 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:8.457 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":67,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:27:05.895: INFO: Only supported for providers [gce gke] (not aws)
... skipping 70 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-map-ddf629d2-e40d-4719-ad57-2ecfd521e756
STEP: Creating a pod to test consume configMaps
Sep 14 19:26:58.607: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-20edf5e7-4af8-484d-bad1-7ba46f5dab14" in namespace "projected-9708" to be "Succeeded or Failed"
Sep 14 19:26:58.750: INFO: Pod "pod-projected-configmaps-20edf5e7-4af8-484d-bad1-7ba46f5dab14": Phase="Pending", Reason="", readiness=false. Elapsed: 142.973588ms
Sep 14 19:27:00.896: INFO: Pod "pod-projected-configmaps-20edf5e7-4af8-484d-bad1-7ba46f5dab14": Phase="Pending", Reason="", readiness=false. Elapsed: 2.288953697s
Sep 14 19:27:03.041: INFO: Pod "pod-projected-configmaps-20edf5e7-4af8-484d-bad1-7ba46f5dab14": Phase="Pending", Reason="", readiness=false. Elapsed: 4.433940769s
Sep 14 19:27:05.186: INFO: Pod "pod-projected-configmaps-20edf5e7-4af8-484d-bad1-7ba46f5dab14": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.578524141s
STEP: Saw pod success
Sep 14 19:27:05.186: INFO: Pod "pod-projected-configmaps-20edf5e7-4af8-484d-bad1-7ba46f5dab14" satisfied condition "Succeeded or Failed"
Sep 14 19:27:05.331: INFO: Trying to get logs from node ip-172-20-48-74.sa-east-1.compute.internal pod pod-projected-configmaps-20edf5e7-4af8-484d-bad1-7ba46f5dab14 container agnhost-container: <nil>
STEP: delete the pod
Sep 14 19:27:05.645: INFO: Waiting for pod pod-projected-configmaps-20edf5e7-4af8-484d-bad1-7ba46f5dab14 to disappear
Sep 14 19:27:05.789: INFO: Pod pod-projected-configmaps-20edf5e7-4af8-484d-bad1-7ba46f5dab14 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:8.477 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":49,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:27:06.097: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 14 lines ...
      Driver emptydir doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":-1,"completed":9,"skipped":56,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 14 19:26:40.454: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support readOnly directory specified in the volumeMount
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:369
Sep 14 19:26:41.185: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Sep 14 19:26:41.475: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-2663" in namespace "provisioning-2663" to be "Succeeded or Failed"
Sep 14 19:26:41.618: INFO: Pod "hostpath-symlink-prep-provisioning-2663": Phase="Pending", Reason="", readiness=false. Elapsed: 143.1348ms
Sep 14 19:26:43.762: INFO: Pod "hostpath-symlink-prep-provisioning-2663": Phase="Pending", Reason="", readiness=false. Elapsed: 2.287155058s
Sep 14 19:26:45.906: INFO: Pod "hostpath-symlink-prep-provisioning-2663": Phase="Pending", Reason="", readiness=false. Elapsed: 4.431571352s
Sep 14 19:26:48.057: INFO: Pod "hostpath-symlink-prep-provisioning-2663": Phase="Pending", Reason="", readiness=false. Elapsed: 6.58236474s
Sep 14 19:26:50.203: INFO: Pod "hostpath-symlink-prep-provisioning-2663": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.728067295s
STEP: Saw pod success
Sep 14 19:26:50.203: INFO: Pod "hostpath-symlink-prep-provisioning-2663" satisfied condition "Succeeded or Failed"
Sep 14 19:26:50.203: INFO: Deleting pod "hostpath-symlink-prep-provisioning-2663" in namespace "provisioning-2663"
Sep 14 19:26:50.351: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-2663" to be fully deleted
Sep 14 19:26:50.494: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-pcq4
STEP: Creating a pod to test subpath
Sep 14 19:26:50.641: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-pcq4" in namespace "provisioning-2663" to be "Succeeded or Failed"
Sep 14 19:26:50.784: INFO: Pod "pod-subpath-test-inlinevolume-pcq4": Phase="Pending", Reason="", readiness=false. Elapsed: 143.143915ms
Sep 14 19:26:52.928: INFO: Pod "pod-subpath-test-inlinevolume-pcq4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.287293296s
Sep 14 19:26:55.072: INFO: Pod "pod-subpath-test-inlinevolume-pcq4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.431463456s
Sep 14 19:26:57.217: INFO: Pod "pod-subpath-test-inlinevolume-pcq4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.576484046s
Sep 14 19:26:59.362: INFO: Pod "pod-subpath-test-inlinevolume-pcq4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.721178385s
STEP: Saw pod success
Sep 14 19:26:59.362: INFO: Pod "pod-subpath-test-inlinevolume-pcq4" satisfied condition "Succeeded or Failed"
Sep 14 19:26:59.505: INFO: Trying to get logs from node ip-172-20-41-171.sa-east-1.compute.internal pod pod-subpath-test-inlinevolume-pcq4 container test-container-subpath-inlinevolume-pcq4: <nil>
STEP: delete the pod
Sep 14 19:26:59.812: INFO: Waiting for pod pod-subpath-test-inlinevolume-pcq4 to disappear
Sep 14 19:26:59.955: INFO: Pod pod-subpath-test-inlinevolume-pcq4 no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-pcq4
Sep 14 19:26:59.955: INFO: Deleting pod "pod-subpath-test-inlinevolume-pcq4" in namespace "provisioning-2663"
STEP: Deleting pod
Sep 14 19:27:00.098: INFO: Deleting pod "pod-subpath-test-inlinevolume-pcq4" in namespace "provisioning-2663"
Sep 14 19:27:00.385: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-2663" in namespace "provisioning-2663" to be "Succeeded or Failed"
Sep 14 19:27:00.528: INFO: Pod "hostpath-symlink-prep-provisioning-2663": Phase="Pending", Reason="", readiness=false. Elapsed: 143.021796ms
Sep 14 19:27:02.673: INFO: Pod "hostpath-symlink-prep-provisioning-2663": Phase="Pending", Reason="", readiness=false. Elapsed: 2.287928203s
Sep 14 19:27:04.820: INFO: Pod "hostpath-symlink-prep-provisioning-2663": Phase="Pending", Reason="", readiness=false. Elapsed: 4.435018511s
Sep 14 19:27:06.964: INFO: Pod "hostpath-symlink-prep-provisioning-2663": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.578965333s
STEP: Saw pod success
Sep 14 19:27:06.964: INFO: Pod "hostpath-symlink-prep-provisioning-2663" satisfied condition "Succeeded or Failed"
Sep 14 19:27:06.964: INFO: Deleting pod "hostpath-symlink-prep-provisioning-2663" in namespace "provisioning-2663"
Sep 14 19:27:07.112: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-2663" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 14 19:27:07.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-2663" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:369
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":10,"skipped":56,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:27:07.561: INFO: Only supported for providers [openstack] (not aws)
... skipping 178 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (block volmode)] provisioning
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should provision storage with pvc data source
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:238
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source","total":-1,"completed":2,"skipped":4,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:27:08.935: INFO: Only supported for providers [gce gke] (not aws)
... skipping 47 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  With a server listening on 0.0.0.0
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:452
    should support forwarding over websockets
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:468
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 should support forwarding over websockets","total":-1,"completed":7,"skipped":56,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:27:10.051: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
... skipping 54 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating replication controller my-hostname-basic-de811c4f-5f85-4bd3-839e-2b16d49a2a80
Sep 14 19:24:04.314: INFO: Pod name my-hostname-basic-de811c4f-5f85-4bd3-839e-2b16d49a2a80: Found 1 pods out of 1
Sep 14 19:24:04.314: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-de811c4f-5f85-4bd3-839e-2b16d49a2a80" are running
Sep 14 19:24:12.602: INFO: Pod "my-hostname-basic-de811c4f-5f85-4bd3-839e-2b16d49a2a80-cclfp" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-09-14 19:24:04 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-09-14 19:24:04 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-de811c4f-5f85-4bd3-839e-2b16d49a2a80]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-09-14 19:24:04 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-de811c4f-5f85-4bd3-839e-2b16d49a2a80]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-09-14 19:24:04 +0000 UTC Reason: Message:}])
Sep 14 19:24:12.603: INFO: Trying to dial the pod
Sep 14 19:24:48.037: INFO: Controller my-hostname-basic-de811c4f-5f85-4bd3-839e-2b16d49a2a80: Failed to GET from replica 1 [my-hostname-basic-de811c4f-5f85-4bd3-839e-2b16d49a2a80-cclfp]: the server is currently unable to handle the request (get pods my-hostname-basic-de811c4f-5f85-4bd3-839e-2b16d49a2a80-cclfp)
pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63767244244, loc:(*time.Location)(0x9de2b80)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63767244244, loc:(*time.Location)(0x9de2b80)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [my-hostname-basic-de811c4f-5f85-4bd3-839e-2b16d49a2a80]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63767244244, loc:(*time.Location)(0x9de2b80)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [my-hostname-basic-de811c4f-5f85-4bd3-839e-2b16d49a2a80]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63767244244, loc:(*time.Location)(0x9de2b80)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.20.41.171", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(0xc003549c80), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"my-hostname-basic-de811c4f-5f85-4bd3-839e-2b16d49a2a80", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc003a37d60), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/agnhost:2.32", ImageID:"", ContainerID:"", Started:(*bool)(0xc003a2368d)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
Sep 14 19:25:23.034: INFO: Controller my-hostname-basic-de811c4f-5f85-4bd3-839e-2b16d49a2a80: Failed to GET from replica 1 [my-hostname-basic-de811c4f-5f85-4bd3-839e-2b16d49a2a80-cclfp]: the server is currently unable to handle the request (get pods my-hostname-basic-de811c4f-5f85-4bd3-839e-2b16d49a2a80-cclfp)
pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63767244244, loc:(*time.Location)(0x9de2b80)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63767244244, loc:(*time.Location)(0x9de2b80)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [my-hostname-basic-de811c4f-5f85-4bd3-839e-2b16d49a2a80]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63767244244, loc:(*time.Location)(0x9de2b80)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [my-hostname-basic-de811c4f-5f85-4bd3-839e-2b16d49a2a80]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63767244244, loc:(*time.Location)(0x9de2b80)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.20.41.171", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(0xc003549c80), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"my-hostname-basic-de811c4f-5f85-4bd3-839e-2b16d49a2a80", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc003a37d60), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/agnhost:2.32", ImageID:"", ContainerID:"", Started:(*bool)(0xc003a2368d)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
Sep 14 19:25:58.042: INFO: Controller my-hostname-basic-de811c4f-5f85-4bd3-839e-2b16d49a2a80: Failed to GET from replica 1 [my-hostname-basic-de811c4f-5f85-4bd3-839e-2b16d49a2a80-cclfp]: the server is currently unable to handle the request (get pods my-hostname-basic-de811c4f-5f85-4bd3-839e-2b16d49a2a80-cclfp)
pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63767244244, loc:(*time.Location)(0x9de2b80)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63767244244, loc:(*time.Location)(0x9de2b80)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [my-hostname-basic-de811c4f-5f85-4bd3-839e-2b16d49a2a80]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63767244244, loc:(*time.Location)(0x9de2b80)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [my-hostname-basic-de811c4f-5f85-4bd3-839e-2b16d49a2a80]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63767244244, loc:(*time.Location)(0x9de2b80)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.20.41.171", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(0xc003549c80), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"my-hostname-basic-de811c4f-5f85-4bd3-839e-2b16d49a2a80", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc003a37d60), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/agnhost:2.32", ImageID:"", ContainerID:"", Started:(*bool)(0xc003a2368d)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
Sep 14 19:26:33.036: INFO: Controller my-hostname-basic-de811c4f-5f85-4bd3-839e-2b16d49a2a80: Failed to GET from replica 1 [my-hostname-basic-de811c4f-5f85-4bd3-839e-2b16d49a2a80-cclfp]: the server is currently unable to handle the request (get pods my-hostname-basic-de811c4f-5f85-4bd3-839e-2b16d49a2a80-cclfp)
pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63767244244, loc:(*time.Location)(0x9de2b80)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63767244244, loc:(*time.Location)(0x9de2b80)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [my-hostname-basic-de811c4f-5f85-4bd3-839e-2b16d49a2a80]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63767244244, loc:(*time.Location)(0x9de2b80)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [my-hostname-basic-de811c4f-5f85-4bd3-839e-2b16d49a2a80]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63767244244, loc:(*time.Location)(0x9de2b80)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.20.41.171", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(0xc003549c80), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"my-hostname-basic-de811c4f-5f85-4bd3-839e-2b16d49a2a80", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc003a37d60), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/agnhost:2.32", ImageID:"", ContainerID:"", Started:(*bool)(0xc003a2368d)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
Sep 14 19:27:03.468: INFO: Controller my-hostname-basic-de811c4f-5f85-4bd3-839e-2b16d49a2a80: Failed to GET from replica 1 [my-hostname-basic-de811c4f-5f85-4bd3-839e-2b16d49a2a80-cclfp]: the server is currently unable to handle the request (get pods my-hostname-basic-de811c4f-5f85-4bd3-839e-2b16d49a2a80-cclfp)
pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63767244244, loc:(*time.Location)(0x9de2b80)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63767244244, loc:(*time.Location)(0x9de2b80)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [my-hostname-basic-de811c4f-5f85-4bd3-839e-2b16d49a2a80]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63767244244, loc:(*time.Location)(0x9de2b80)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [my-hostname-basic-de811c4f-5f85-4bd3-839e-2b16d49a2a80]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63767244244, loc:(*time.Location)(0x9de2b80)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.20.41.171", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(0xc003549c80), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"my-hostname-basic-de811c4f-5f85-4bd3-839e-2b16d49a2a80", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc003a37d60), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, 
RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/agnhost:2.32", ImageID:"", ContainerID:"", Started:(*bool)(0xc003a2368d)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
Sep 14 19:27:03.469: FAIL: Did not get expected responses within the timeout period of 120.00 seconds.

Full Stack Trace
k8s.io/kubernetes/test/e2e/apps.glob..func8.2()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:65 +0x57
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc002cd3380)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
... skipping 234 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Sep 14 19:27:03.469: Did not get expected responses within the timeout period of 120.00 seconds.

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:65
------------------------------
{"msg":"FAILED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","total":-1,"completed":0,"skipped":4,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:27:10.217: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 55 lines ...
Sep 14 19:26:34.416: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass volume-6392ptg6x
STEP: creating a claim
Sep 14 19:26:34.561: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod exec-volume-test-dynamicpv-dwh6
STEP: Creating a pod to test exec-volume-test
Sep 14 19:26:35.013: INFO: Waiting up to 5m0s for pod "exec-volume-test-dynamicpv-dwh6" in namespace "volume-6392" to be "Succeeded or Failed"
Sep 14 19:26:35.159: INFO: Pod "exec-volume-test-dynamicpv-dwh6": Phase="Pending", Reason="", readiness=false. Elapsed: 146.377457ms
Sep 14 19:26:37.307: INFO: Pod "exec-volume-test-dynamicpv-dwh6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.294075894s
Sep 14 19:26:39.454: INFO: Pod "exec-volume-test-dynamicpv-dwh6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.441065323s
Sep 14 19:26:41.598: INFO: Pod "exec-volume-test-dynamicpv-dwh6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.585087952s
Sep 14 19:26:43.742: INFO: Pod "exec-volume-test-dynamicpv-dwh6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.72921955s
Sep 14 19:26:45.887: INFO: Pod "exec-volume-test-dynamicpv-dwh6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.8737452s
Sep 14 19:26:48.032: INFO: Pod "exec-volume-test-dynamicpv-dwh6": Phase="Pending", Reason="", readiness=false. Elapsed: 13.019382145s
Sep 14 19:26:50.176: INFO: Pod "exec-volume-test-dynamicpv-dwh6": Phase="Pending", Reason="", readiness=false. Elapsed: 15.163626697s
Sep 14 19:26:52.320: INFO: Pod "exec-volume-test-dynamicpv-dwh6": Phase="Pending", Reason="", readiness=false. Elapsed: 17.30687836s
Sep 14 19:26:54.464: INFO: Pod "exec-volume-test-dynamicpv-dwh6": Phase="Pending", Reason="", readiness=false. Elapsed: 19.451214288s
Sep 14 19:26:56.608: INFO: Pod "exec-volume-test-dynamicpv-dwh6": Phase="Pending", Reason="", readiness=false. Elapsed: 21.594991148s
Sep 14 19:26:58.752: INFO: Pod "exec-volume-test-dynamicpv-dwh6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 23.738966964s
STEP: Saw pod success
Sep 14 19:26:58.752: INFO: Pod "exec-volume-test-dynamicpv-dwh6" satisfied condition "Succeeded or Failed"
Sep 14 19:26:58.896: INFO: Trying to get logs from node ip-172-20-48-74.sa-east-1.compute.internal pod exec-volume-test-dynamicpv-dwh6 container exec-container-dynamicpv-dwh6: <nil>
STEP: delete the pod
Sep 14 19:26:59.187: INFO: Waiting for pod exec-volume-test-dynamicpv-dwh6 to disappear
Sep 14 19:26:59.330: INFO: Pod exec-volume-test-dynamicpv-dwh6 no longer exists
STEP: Deleting pod exec-volume-test-dynamicpv-dwh6
Sep 14 19:26:59.331: INFO: Deleting pod "exec-volume-test-dynamicpv-dwh6" in namespace "volume-6392"
... skipping 17 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":2,"skipped":10,"failed":0}

S
------------------------------
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 59 lines ...
Sep 14 19:26:36.500: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-947gkr97
STEP: creating a claim
Sep 14 19:26:36.700: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-rghk
STEP: Creating a pod to test subpath
Sep 14 19:26:37.166: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-rghk" in namespace "provisioning-947" to be "Succeeded or Failed"
Sep 14 19:26:37.311: INFO: Pod "pod-subpath-test-dynamicpv-rghk": Phase="Pending", Reason="", readiness=false. Elapsed: 144.723937ms
Sep 14 19:26:39.458: INFO: Pod "pod-subpath-test-dynamicpv-rghk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.292048324s
Sep 14 19:26:41.603: INFO: Pod "pod-subpath-test-dynamicpv-rghk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.436768588s
Sep 14 19:26:43.747: INFO: Pod "pod-subpath-test-dynamicpv-rghk": Phase="Pending", Reason="", readiness=false. Elapsed: 6.580832005s
Sep 14 19:26:45.892: INFO: Pod "pod-subpath-test-dynamicpv-rghk": Phase="Pending", Reason="", readiness=false. Elapsed: 8.726012182s
Sep 14 19:26:48.037: INFO: Pod "pod-subpath-test-dynamicpv-rghk": Phase="Pending", Reason="", readiness=false. Elapsed: 10.871002994s
Sep 14 19:26:50.181: INFO: Pod "pod-subpath-test-dynamicpv-rghk": Phase="Pending", Reason="", readiness=false. Elapsed: 13.015223157s
Sep 14 19:26:52.325: INFO: Pod "pod-subpath-test-dynamicpv-rghk": Phase="Pending", Reason="", readiness=false. Elapsed: 15.158973212s
Sep 14 19:26:54.469: INFO: Pod "pod-subpath-test-dynamicpv-rghk": Phase="Pending", Reason="", readiness=false. Elapsed: 17.30298205s
Sep 14 19:26:56.613: INFO: Pod "pod-subpath-test-dynamicpv-rghk": Phase="Pending", Reason="", readiness=false. Elapsed: 19.446953658s
Sep 14 19:26:58.758: INFO: Pod "pod-subpath-test-dynamicpv-rghk": Phase="Pending", Reason="", readiness=false. Elapsed: 21.591538584s
Sep 14 19:27:00.904: INFO: Pod "pod-subpath-test-dynamicpv-rghk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 23.737651512s
STEP: Saw pod success
Sep 14 19:27:00.904: INFO: Pod "pod-subpath-test-dynamicpv-rghk" satisfied condition "Succeeded or Failed"
Sep 14 19:27:01.047: INFO: Trying to get logs from node ip-172-20-48-93.sa-east-1.compute.internal pod pod-subpath-test-dynamicpv-rghk container test-container-subpath-dynamicpv-rghk: <nil>
STEP: delete the pod
Sep 14 19:27:01.340: INFO: Waiting for pod pod-subpath-test-dynamicpv-rghk to disappear
Sep 14 19:27:01.484: INFO: Pod pod-subpath-test-dynamicpv-rghk no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-rghk
Sep 14 19:27:01.484: INFO: Deleting pod "pod-subpath-test-dynamicpv-rghk" in namespace "provisioning-947"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":8,"skipped":57,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:27:13.096: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
... skipping 97 lines ...
Sep 14 19:26:36.608: INFO: >>> kubeConfig: /root/.kube/config
Sep 14 19:26:37.550: INFO: Exec stderr: ""
Sep 14 19:26:40.012: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir "/var/lib/kubelet/mount-propagation-3031"/host; mount -t tmpfs e2e-mount-propagation-host "/var/lib/kubelet/mount-propagation-3031"/host; echo host > "/var/lib/kubelet/mount-propagation-3031"/host/file] Namespace:mount-propagation-3031 PodName:hostexec-ip-172-20-50-202.sa-east-1.compute.internal-pkkjm ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Sep 14 19:26:40.012: INFO: >>> kubeConfig: /root/.kube/config
Sep 14 19:26:41.377: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-3031 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Sep 14 19:26:41.377: INFO: >>> kubeConfig: /root/.kube/config
Sep 14 19:26:42.343: INFO: pod private mount master: stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1
Sep 14 19:26:42.486: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-3031 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Sep 14 19:26:42.486: INFO: >>> kubeConfig: /root/.kube/config
Sep 14 19:26:43.479: INFO: pod private mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1
Sep 14 19:26:43.630: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-3031 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Sep 14 19:26:43.630: INFO: >>> kubeConfig: /root/.kube/config
Sep 14 19:26:44.608: INFO: pod private mount private: stdout: "private", stderr: "" error: <nil>
Sep 14 19:26:44.751: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-3031 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Sep 14 19:26:44.751: INFO: >>> kubeConfig: /root/.kube/config
Sep 14 19:26:45.702: INFO: pod private mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1
Sep 14 19:26:45.845: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-3031 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Sep 14 19:26:45.845: INFO: >>> kubeConfig: /root/.kube/config
Sep 14 19:26:46.858: INFO: pod private mount host: stdout: "", stderr: "cat: can't open '/mnt/test/host/file': No such file or directory" error: command terminated with exit code 1
Sep 14 19:26:47.001: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-3031 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Sep 14 19:26:47.001: INFO: >>> kubeConfig: /root/.kube/config
Sep 14 19:26:47.940: INFO: pod default mount master: stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1
Sep 14 19:26:48.087: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-3031 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Sep 14 19:26:48.087: INFO: >>> kubeConfig: /root/.kube/config
Sep 14 19:26:49.115: INFO: pod default mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1
Sep 14 19:26:49.258: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-3031 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Sep 14 19:26:49.259: INFO: >>> kubeConfig: /root/.kube/config
Sep 14 19:26:50.422: INFO: pod default mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1
Sep 14 19:26:50.565: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-3031 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Sep 14 19:26:50.565: INFO: >>> kubeConfig: /root/.kube/config
Sep 14 19:26:51.506: INFO: pod default mount default: stdout: "default", stderr: "" error: <nil>
Sep 14 19:26:51.650: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-3031 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Sep 14 19:26:51.650: INFO: >>> kubeConfig: /root/.kube/config
Sep 14 19:26:52.701: INFO: pod default mount host: stdout: "", stderr: "cat: can't open '/mnt/test/host/file': No such file or directory" error: command terminated with exit code 1
Sep 14 19:26:52.845: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-3031 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Sep 14 19:26:52.845: INFO: >>> kubeConfig: /root/.kube/config
Sep 14 19:26:54.053: INFO: pod master mount master: stdout: "master", stderr: "" error: <nil>
Sep 14 19:26:54.196: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-3031 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Sep 14 19:26:54.196: INFO: >>> kubeConfig: /root/.kube/config
Sep 14 19:26:55.202: INFO: pod master mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1
Sep 14 19:26:55.345: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-3031 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Sep 14 19:26:55.346: INFO: >>> kubeConfig: /root/.kube/config
Sep 14 19:26:56.384: INFO: pod master mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1
Sep 14 19:26:56.527: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-3031 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Sep 14 19:26:56.527: INFO: >>> kubeConfig: /root/.kube/config
Sep 14 19:26:57.490: INFO: pod master mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1
Sep 14 19:26:57.634: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-3031 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Sep 14 19:26:57.634: INFO: >>> kubeConfig: /root/.kube/config
Sep 14 19:26:58.589: INFO: pod master mount host: stdout: "host", stderr: "" error: <nil>
Sep 14 19:26:58.732: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-3031 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Sep 14 19:26:58.732: INFO: >>> kubeConfig: /root/.kube/config
Sep 14 19:26:59.712: INFO: pod slave mount master: stdout: "master", stderr: "" error: <nil>
Sep 14 19:26:59.857: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-3031 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Sep 14 19:26:59.857: INFO: >>> kubeConfig: /root/.kube/config
Sep 14 19:27:00.803: INFO: pod slave mount slave: stdout: "slave", stderr: "" error: <nil>
Sep 14 19:27:00.949: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-3031 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Sep 14 19:27:00.949: INFO: >>> kubeConfig: /root/.kube/config
Sep 14 19:27:01.914: INFO: pod slave mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1
Sep 14 19:27:02.058: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-3031 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Sep 14 19:27:02.058: INFO: >>> kubeConfig: /root/.kube/config
Sep 14 19:27:03.017: INFO: pod slave mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1
Sep 14 19:27:03.160: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-3031 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Sep 14 19:27:03.160: INFO: >>> kubeConfig: /root/.kube/config
Sep 14 19:27:04.346: INFO: pod slave mount host: stdout: "host", stderr: "" error: <nil>
Sep 14 19:27:04.346: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c test `cat "/var/lib/kubelet/mount-propagation-3031"/master/file` = master] Namespace:mount-propagation-3031 PodName:hostexec-ip-172-20-50-202.sa-east-1.compute.internal-pkkjm ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Sep 14 19:27:04.346: INFO: >>> kubeConfig: /root/.kube/config
Sep 14 19:27:05.348: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c test ! -e "/var/lib/kubelet/mount-propagation-3031"/slave/file] Namespace:mount-propagation-3031 PodName:hostexec-ip-172-20-50-202.sa-east-1.compute.internal-pkkjm ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Sep 14 19:27:05.348: INFO: >>> kubeConfig: /root/.kube/config
Sep 14 19:27:06.316: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/var/lib/kubelet/mount-propagation-3031"/host] Namespace:mount-propagation-3031 PodName:hostexec-ip-172-20-50-202.sa-east-1.compute.internal-pkkjm ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Sep 14 19:27:06.316: INFO: >>> kubeConfig: /root/.kube/config
... skipping 21 lines ...
• [SLOW TEST:75.857 seconds]
[sig-node] Mount propagation
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should propagate mounts to the host
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/mount_propagation.go:82
------------------------------
{"msg":"PASSED [sig-node] Mount propagation should propagate mounts to the host","total":-1,"completed":5,"skipped":35,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 57 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link-bindmounted] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":3,"skipped":18,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:27:16.494: INFO: Driver aws doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 67 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-bindmounted]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 84 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:452
    that expects a client request
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:453
      should support a client that connects, sends DATA, and disconnects
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:457
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects a client request should support a client that connects, sends DATA, and disconnects","total":-1,"completed":8,"skipped":73,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:27:23.474: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 116 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":8,"skipped":23,"failed":0}

S
------------------------------
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 19 lines ...
• [SLOW TEST:10.160 seconds]
[sig-apps] DisruptionController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should observe PodDisruptionBudget status updated [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should observe PodDisruptionBudget status updated [Conformance]","total":-1,"completed":4,"skipped":35,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 269 lines ...
• [SLOW TEST:27.978 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run the lifecycle of a Deployment [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","total":-1,"completed":10,"skipped":54,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}

S
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 142 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":5,"skipped":37,"failed":0}

S
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 11 lines ...
I0914 19:24:47.117897    4857 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0914 19:24:50.118149    4857 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0914 19:24:53.118494    4857 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Sep 14 19:24:53.551: INFO: Creating new exec pod
Sep 14 19:24:59.127: INFO: Running '/tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-8858 exec execpod-affinityt7xlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80'
Sep 14 19:25:05.674: INFO: rc: 1
Sep 14 19:25:05.675: INFO: Service reachability failing with error: error running /tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-8858 exec execpod-affinityt7xlh -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 affinity-nodeport 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
... skipping 252 lines ...
Sep 14 19:27:12.650: FAIL: Unexpected error:
    <*errors.errorString | 0xc0027dc390>: {
        s: "service is not reachable within 2m0s timeout on endpoint affinity-nodeport:80 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint affinity-nodeport:80 over TCP protocol
occurred

... skipping 294 lines ...
• Failure [173.568 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should have session affinity work for NodePort service [LinuxOnly] [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Sep 14 19:27:12.650: Unexpected error:
      <*errors.errorString | 0xc0027dc390>: {
          s: "service is not reachable within 2m0s timeout on endpoint affinity-nodeport:80 over TCP protocol",
      }
      service is not reachable within 2m0s timeout on endpoint affinity-nodeport:80 over TCP protocol
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2572
------------------------------
{"msg":"FAILED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":1,"skipped":27,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:27:36.552: INFO: Only supported for providers [gce gke] (not aws)
... skipping 119 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":9,"skipped":50,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] server version
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 14 19:27:37.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "server-version-8428" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":-1,"completed":10,"skipped":58,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:27:38.275: INFO: Only supported for providers [azure] (not aws)
... skipping 201 lines ...
    Requires at least 2 nodes (not 0)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":-1,"completed":11,"skipped":55,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
[BeforeEach] [sig-storage] PersistentVolumes GCEPD
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 14 19:27:38.806: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pv
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 11 lines ...
Sep 14 19:27:39.816: INFO: pv is nil


S [SKIPPING] in Spec Setup (BeforeEach) [1.010 seconds]
[sig-storage] PersistentVolumes GCEPD
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should test that deleting a PVC before the pod does not cause pod deletion to fail on PD detach [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:127

  Only supported for providers [gce gke] (not aws)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:85
------------------------------
... skipping 56 lines ...
• [SLOW TEST:26.763 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":-1,"completed":6,"skipped":36,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:27:40.236: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
... skipping 91 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-301abaa2-5c4a-4bb6-b63b-eb1a914a7e00
STEP: Creating a pod to test consume secrets
Sep 14 19:27:38.170: INFO: Waiting up to 5m0s for pod "pod-secrets-d6c9767c-21b9-40c7-b9ee-e85a81d03b13" in namespace "secrets-7696" to be "Succeeded or Failed"
Sep 14 19:27:38.314: INFO: Pod "pod-secrets-d6c9767c-21b9-40c7-b9ee-e85a81d03b13": Phase="Pending", Reason="", readiness=false. Elapsed: 144.176647ms
Sep 14 19:27:40.460: INFO: Pod "pod-secrets-d6c9767c-21b9-40c7-b9ee-e85a81d03b13": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.289900171s
STEP: Saw pod success
Sep 14 19:27:40.460: INFO: Pod "pod-secrets-d6c9767c-21b9-40c7-b9ee-e85a81d03b13" satisfied condition "Succeeded or Failed"
Sep 14 19:27:40.606: INFO: Trying to get logs from node ip-172-20-48-93.sa-east-1.compute.internal pod pod-secrets-d6c9767c-21b9-40c7-b9ee-e85a81d03b13 container secret-volume-test: <nil>
STEP: delete the pod
Sep 14 19:27:40.904: INFO: Waiting for pod pod-secrets-d6c9767c-21b9-40c7-b9ee-e85a81d03b13 to disappear
Sep 14 19:27:41.078: INFO: Pod pod-secrets-d6c9767c-21b9-40c7-b9ee-e85a81d03b13 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 5 lines ...
• [SLOW TEST:5.005 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":37,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]}

SSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 19 lines ...
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4559-crds.webhook.example.com via the AdmissionRegistration API
Sep 14 19:26:45.928: INFO: Waiting for webhook configuration to be ready...
Sep 14 19:26:56.317: INFO: Waiting for webhook configuration to be ready...
Sep 14 19:27:06.621: INFO: Waiting for webhook configuration to be ready...
Sep 14 19:27:16.922: INFO: Waiting for webhook configuration to be ready...
Sep 14 19:27:27.213: INFO: Waiting for webhook configuration to be ready...
Sep 14 19:27:27.214: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.errorString | 0xc000244250>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 513 lines ...
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with different stored version [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Sep 14 19:27:27.214: waiting for webhook configuration to be ready
  Unexpected error:
      <*errors.errorString | 0xc000244250>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1826
------------------------------
{"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":-1,"completed":9,"skipped":24,"failed":0}
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 14 19:27:29.053: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 15 lines ...
• [SLOW TEST:13.067 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a persistent volume claim with a storage class
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/resource_quota.go:531
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim with a storage class","total":-1,"completed":10,"skipped":24,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] Volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 138 lines ...
Sep 14 19:27:36.108: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0666 on node default medium
Sep 14 19:27:36.972: INFO: Waiting up to 5m0s for pod "pod-c6a85f79-b2d8-4c78-8b0c-78853d85c44c" in namespace "emptydir-2142" to be "Succeeded or Failed"
Sep 14 19:27:37.117: INFO: Pod "pod-c6a85f79-b2d8-4c78-8b0c-78853d85c44c": Phase="Pending", Reason="", readiness=false. Elapsed: 144.069891ms
Sep 14 19:27:39.263: INFO: Pod "pod-c6a85f79-b2d8-4c78-8b0c-78853d85c44c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.290915083s
Sep 14 19:27:41.426: INFO: Pod "pod-c6a85f79-b2d8-4c78-8b0c-78853d85c44c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.453856827s
Sep 14 19:27:43.571: INFO: Pod "pod-c6a85f79-b2d8-4c78-8b0c-78853d85c44c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.598649925s
STEP: Saw pod success
Sep 14 19:27:43.571: INFO: Pod "pod-c6a85f79-b2d8-4c78-8b0c-78853d85c44c" satisfied condition "Succeeded or Failed"
Sep 14 19:27:43.715: INFO: Trying to get logs from node ip-172-20-48-74.sa-east-1.compute.internal pod pod-c6a85f79-b2d8-4c78-8b0c-78853d85c44c container test-container: <nil>
STEP: delete the pod
Sep 14 19:27:44.011: INFO: Waiting for pod pod-c6a85f79-b2d8-4c78-8b0c-78853d85c44c to disappear
Sep 14 19:27:44.155: INFO: Pod pod-c6a85f79-b2d8-4c78-8b0c-78853d85c44c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:8.337 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":38,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:27:44.474: INFO: Only supported for providers [openstack] (not aws)
... skipping 35 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":11,"skipped":59,"failed":0}
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 14 19:27:28.360: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 36 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name projected-secret-test-563a3c83-3c78-4b03-8897-63dcb0822b68
STEP: Creating a pod to test consume secrets
Sep 14 19:27:43.762: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-94a005d5-eee3-4862-bc66-cde0acb95d48" in namespace "projected-9942" to be "Succeeded or Failed"
Sep 14 19:27:43.906: INFO: Pod "pod-projected-secrets-94a005d5-eee3-4862-bc66-cde0acb95d48": Phase="Pending", Reason="", readiness=false. Elapsed: 144.183232ms
Sep 14 19:27:46.051: INFO: Pod "pod-projected-secrets-94a005d5-eee3-4862-bc66-cde0acb95d48": Phase="Pending", Reason="", readiness=false. Elapsed: 2.288929556s
Sep 14 19:27:48.196: INFO: Pod "pod-projected-secrets-94a005d5-eee3-4862-bc66-cde0acb95d48": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.434469773s
STEP: Saw pod success
Sep 14 19:27:48.196: INFO: Pod "pod-projected-secrets-94a005d5-eee3-4862-bc66-cde0acb95d48" satisfied condition "Succeeded or Failed"
Sep 14 19:27:48.341: INFO: Trying to get logs from node ip-172-20-48-74.sa-east-1.compute.internal pod pod-projected-secrets-94a005d5-eee3-4862-bc66-cde0acb95d48 container secret-volume-test: <nil>
STEP: delete the pod
Sep 14 19:27:48.636: INFO: Waiting for pod pod-projected-secrets-94a005d5-eee3-4862-bc66-cde0acb95d48 to disappear
Sep 14 19:27:48.780: INFO: Pod pod-projected-secrets-94a005d5-eee3-4862-bc66-cde0acb95d48 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.322 seconds]
[sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":62,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]}

SSSS
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 14 19:27:39.841: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-763414de-c1f3-43b0-9b84-4c373f09d294
STEP: Creating a pod to test consume configMaps
Sep 14 19:27:40.865: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-72389f06-ba5d-47fe-a90a-0657b8bb0b6a" in namespace "projected-5520" to be "Succeeded or Failed"
Sep 14 19:27:41.078: INFO: Pod "pod-projected-configmaps-72389f06-ba5d-47fe-a90a-0657b8bb0b6a": Phase="Pending", Reason="", readiness=false. Elapsed: 213.670591ms
Sep 14 19:27:43.222: INFO: Pod "pod-projected-configmaps-72389f06-ba5d-47fe-a90a-0657b8bb0b6a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.357276183s
Sep 14 19:27:45.369: INFO: Pod "pod-projected-configmaps-72389f06-ba5d-47fe-a90a-0657b8bb0b6a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.50395661s
Sep 14 19:27:47.513: INFO: Pod "pod-projected-configmaps-72389f06-ba5d-47fe-a90a-0657b8bb0b6a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.648015669s
Sep 14 19:27:49.656: INFO: Pod "pod-projected-configmaps-72389f06-ba5d-47fe-a90a-0657b8bb0b6a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.791833809s
STEP: Saw pod success
Sep 14 19:27:49.657: INFO: Pod "pod-projected-configmaps-72389f06-ba5d-47fe-a90a-0657b8bb0b6a" satisfied condition "Succeeded or Failed"
Sep 14 19:27:49.800: INFO: Trying to get logs from node ip-172-20-48-74.sa-east-1.compute.internal pod pod-projected-configmaps-72389f06-ba5d-47fe-a90a-0657b8bb0b6a container projected-configmap-volume-test: <nil>
STEP: delete the pod
Sep 14 19:27:50.092: INFO: Waiting for pod pod-projected-configmaps-72389f06-ba5d-47fe-a90a-0657b8bb0b6a to disappear
Sep 14 19:27:50.236: INFO: Pod pod-projected-configmaps-72389f06-ba5d-47fe-a90a-0657b8bb0b6a no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:10.684 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":59,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:27:50.534: INFO: Driver emptydir doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 165 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":3,"skipped":8,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:27:50.709: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 66 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 14 19:27:52.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-7138" for this suite.

•
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":-1,"completed":4,"skipped":66,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:27:52.525: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
... skipping 24 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Sep 14 19:27:41.212: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-adba4c62-04d3-4108-8dea-996d223d5cc3" in namespace "security-context-test-6291" to be "Succeeded or Failed"
Sep 14 19:27:41.389: INFO: Pod "alpine-nnp-false-adba4c62-04d3-4108-8dea-996d223d5cc3": Phase="Pending", Reason="", readiness=false. Elapsed: 176.993446ms
Sep 14 19:27:43.534: INFO: Pod "alpine-nnp-false-adba4c62-04d3-4108-8dea-996d223d5cc3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.321550037s
Sep 14 19:27:45.678: INFO: Pod "alpine-nnp-false-adba4c62-04d3-4108-8dea-996d223d5cc3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.465190341s
Sep 14 19:27:47.822: INFO: Pod "alpine-nnp-false-adba4c62-04d3-4108-8dea-996d223d5cc3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.609523824s
Sep 14 19:27:49.967: INFO: Pod "alpine-nnp-false-adba4c62-04d3-4108-8dea-996d223d5cc3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.754687453s
Sep 14 19:27:52.111: INFO: Pod "alpine-nnp-false-adba4c62-04d3-4108-8dea-996d223d5cc3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.898541189s
Sep 14 19:27:52.111: INFO: Pod "alpine-nnp-false-adba4c62-04d3-4108-8dea-996d223d5cc3" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 14 19:27:52.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-6291" for this suite.


... skipping 81 lines ...
Sep 14 19:27:38.949: INFO: PersistentVolumeClaim pvc-488vt found but phase is Pending instead of Bound.
Sep 14 19:27:41.104: INFO: PersistentVolumeClaim pvc-488vt found and phase=Bound (10.879971643s)
Sep 14 19:27:41.104: INFO: Waiting up to 3m0s for PersistentVolume local-sh6vk to have phase Bound
Sep 14 19:27:41.269: INFO: PersistentVolume local-sh6vk found and phase=Bound (164.732883ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-p7mm
STEP: Creating a pod to test subpath
Sep 14 19:27:41.717: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-p7mm" in namespace "provisioning-8330" to be "Succeeded or Failed"
Sep 14 19:27:41.861: INFO: Pod "pod-subpath-test-preprovisionedpv-p7mm": Phase="Pending", Reason="", readiness=false. Elapsed: 143.359488ms
Sep 14 19:27:44.007: INFO: Pod "pod-subpath-test-preprovisionedpv-p7mm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.289348698s
Sep 14 19:27:46.153: INFO: Pod "pod-subpath-test-preprovisionedpv-p7mm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.435926243s
Sep 14 19:27:48.298: INFO: Pod "pod-subpath-test-preprovisionedpv-p7mm": Phase="Pending", Reason="", readiness=false. Elapsed: 6.581042918s
Sep 14 19:27:50.444: INFO: Pod "pod-subpath-test-preprovisionedpv-p7mm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.726426093s
STEP: Saw pod success
Sep 14 19:27:50.444: INFO: Pod "pod-subpath-test-preprovisionedpv-p7mm" satisfied condition "Succeeded or Failed"
Sep 14 19:27:50.587: INFO: Trying to get logs from node ip-172-20-50-202.sa-east-1.compute.internal pod pod-subpath-test-preprovisionedpv-p7mm container test-container-subpath-preprovisionedpv-p7mm: <nil>
STEP: delete the pod
Sep 14 19:27:50.883: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-p7mm to disappear
Sep 14 19:27:51.027: INFO: Pod pod-subpath-test-preprovisionedpv-p7mm no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-p7mm
Sep 14 19:27:51.027: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-p7mm" in namespace "provisioning-8330"
... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:369
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":9,"skipped":82,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
... skipping 229 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":1,"skipped":25,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:28:02.621: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 63 lines ...
Sep 14 19:27:53.846: INFO: PersistentVolumeClaim pvc-7r8q7 found but phase is Pending instead of Bound.
Sep 14 19:27:55.990: INFO: PersistentVolumeClaim pvc-7r8q7 found and phase=Bound (13.008646784s)
Sep 14 19:27:55.990: INFO: Waiting up to 3m0s for PersistentVolume local-pbf6d to have phase Bound
Sep 14 19:27:56.134: INFO: PersistentVolume local-pbf6d found and phase=Bound (143.958535ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-bj5q
STEP: Creating a pod to test subpath
Sep 14 19:27:56.565: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-bj5q" in namespace "provisioning-3616" to be "Succeeded or Failed"
Sep 14 19:27:56.709: INFO: Pod "pod-subpath-test-preprovisionedpv-bj5q": Phase="Pending", Reason="", readiness=false. Elapsed: 143.505615ms
Sep 14 19:27:58.858: INFO: Pod "pod-subpath-test-preprovisionedpv-bj5q": Phase="Pending", Reason="", readiness=false. Elapsed: 2.2929706s
Sep 14 19:28:01.002: INFO: Pod "pod-subpath-test-preprovisionedpv-bj5q": Phase="Pending", Reason="", readiness=false. Elapsed: 4.437279742s
Sep 14 19:28:03.146: INFO: Pod "pod-subpath-test-preprovisionedpv-bj5q": Phase="Pending", Reason="", readiness=false. Elapsed: 6.581284549s
Sep 14 19:28:05.291: INFO: Pod "pod-subpath-test-preprovisionedpv-bj5q": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.725966525s
STEP: Saw pod success
Sep 14 19:28:05.291: INFO: Pod "pod-subpath-test-preprovisionedpv-bj5q" satisfied condition "Succeeded or Failed"
Sep 14 19:28:05.434: INFO: Trying to get logs from node ip-172-20-50-202.sa-east-1.compute.internal pod pod-subpath-test-preprovisionedpv-bj5q container test-container-volume-preprovisionedpv-bj5q: <nil>
STEP: delete the pod
Sep 14 19:28:05.731: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-bj5q to disappear
Sep 14 19:28:05.874: INFO: Pod pod-subpath-test-preprovisionedpv-bj5q no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-bj5q
Sep 14 19:28:05.874: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-bj5q" in namespace "provisioning-3616"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":11,"skipped":92,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:28:07.867: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 81 lines ...
      Driver local doesn't support InlineVolume -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":7,"skipped":34,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 14 19:27:57.752: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 70 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithoutformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":8,"skipped":34,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
... skipping 58 lines ...
Sep 14 19:28:04.711: INFO: Pod aws-client still exists
Sep 14 19:28:06.561: INFO: Waiting for pod aws-client to disappear
Sep 14 19:28:06.704: INFO: Pod aws-client still exists
Sep 14 19:28:08.561: INFO: Waiting for pod aws-client to disappear
Sep 14 19:28:08.704: INFO: Pod aws-client no longer exists
STEP: cleaning the environment after aws
Sep 14 19:28:09.020: INFO: Couldn't delete PD "aws://sa-east-1a/vol-0fbbf63621363f455", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0fbbf63621363f455 is currently attached to i-0dd97304ea8ca0263
	status code: 400, request id: 1831ac24-c1ad-4bcd-9b78-42b6cd7237b8
Sep 14 19:28:14.837: INFO: Couldn't delete PD "aws://sa-east-1a/vol-0fbbf63621363f455", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0fbbf63621363f455 is currently attached to i-0dd97304ea8ca0263
	status code: 400, request id: 57f005b5-a5af-45cf-932b-6c121996d77a
Sep 14 19:28:20.694: INFO: Successfully deleted PD "aws://sa-east-1a/vol-0fbbf63621363f455".
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 14 19:28:20.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-8096" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (ext4)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (ext4)] volumes should store data","total":-1,"completed":11,"skipped":78,"failed":0}

SS
------------------------------
[BeforeEach] [sig-node] crictl
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 45 lines ...
Sep 14 19:27:53.326: INFO: PersistentVolumeClaim pvc-gt2mq found but phase is Pending instead of Bound.
Sep 14 19:27:55.471: INFO: PersistentVolumeClaim pvc-gt2mq found and phase=Bound (8.722777193s)
Sep 14 19:27:55.471: INFO: Waiting up to 3m0s for PersistentVolume local-cnm9f to have phase Bound
Sep 14 19:27:55.616: INFO: PersistentVolume local-cnm9f found and phase=Bound (145.20374ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-trjr
STEP: Creating a pod to test atomic-volume-subpath
Sep 14 19:27:56.054: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-trjr" in namespace "provisioning-7766" to be "Succeeded or Failed"
Sep 14 19:27:56.199: INFO: Pod "pod-subpath-test-preprovisionedpv-trjr": Phase="Pending", Reason="", readiness=false. Elapsed: 144.156728ms
Sep 14 19:27:58.344: INFO: Pod "pod-subpath-test-preprovisionedpv-trjr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.289728417s
Sep 14 19:28:00.489: INFO: Pod "pod-subpath-test-preprovisionedpv-trjr": Phase="Pending", Reason="", readiness=false. Elapsed: 4.434834932s
Sep 14 19:28:02.634: INFO: Pod "pod-subpath-test-preprovisionedpv-trjr": Phase="Pending", Reason="", readiness=false. Elapsed: 6.579807171s
Sep 14 19:28:04.781: INFO: Pod "pod-subpath-test-preprovisionedpv-trjr": Phase="Pending", Reason="", readiness=false. Elapsed: 8.726538808s
Sep 14 19:28:06.926: INFO: Pod "pod-subpath-test-preprovisionedpv-trjr": Phase="Running", Reason="", readiness=true. Elapsed: 10.871136484s
Sep 14 19:28:09.071: INFO: Pod "pod-subpath-test-preprovisionedpv-trjr": Phase="Running", Reason="", readiness=true. Elapsed: 13.016238357s
Sep 14 19:28:11.216: INFO: Pod "pod-subpath-test-preprovisionedpv-trjr": Phase="Running", Reason="", readiness=true. Elapsed: 15.161550125s
Sep 14 19:28:13.360: INFO: Pod "pod-subpath-test-preprovisionedpv-trjr": Phase="Running", Reason="", readiness=true. Elapsed: 17.305817297s
Sep 14 19:28:15.506: INFO: Pod "pod-subpath-test-preprovisionedpv-trjr": Phase="Running", Reason="", readiness=true. Elapsed: 19.45138727s
Sep 14 19:28:17.650: INFO: Pod "pod-subpath-test-preprovisionedpv-trjr": Phase="Running", Reason="", readiness=true. Elapsed: 21.595732348s
Sep 14 19:28:19.796: INFO: Pod "pod-subpath-test-preprovisionedpv-trjr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 23.741232195s
STEP: Saw pod success
Sep 14 19:28:19.796: INFO: Pod "pod-subpath-test-preprovisionedpv-trjr" satisfied condition "Succeeded or Failed"
Sep 14 19:28:19.941: INFO: Trying to get logs from node ip-172-20-48-93.sa-east-1.compute.internal pod pod-subpath-test-preprovisionedpv-trjr container test-container-subpath-preprovisionedpv-trjr: <nil>
STEP: delete the pod
Sep 14 19:28:20.246: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-trjr to disappear
Sep 14 19:28:20.390: INFO: Pod pod-subpath-test-preprovisionedpv-trjr no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-trjr
Sep 14 19:28:20.390: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-trjr" in namespace "provisioning-7766"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":11,"skipped":25,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:28:22.622: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:238

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":-1,"completed":12,"skipped":59,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 14 19:27:46.743: INFO: >>> kubeConfig: /root/.kube/config
... skipping 14 lines ...
Sep 14 19:27:53.383: INFO: PersistentVolumeClaim pvc-r4rrg found but phase is Pending instead of Bound.
Sep 14 19:27:55.527: INFO: PersistentVolumeClaim pvc-r4rrg found and phase=Bound (6.57535349s)
Sep 14 19:27:55.527: INFO: Waiting up to 3m0s for PersistentVolume aws-jgmjf to have phase Bound
Sep 14 19:27:55.670: INFO: PersistentVolume aws-jgmjf found and phase=Bound (142.923401ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-d2td
STEP: Creating a pod to test exec-volume-test
Sep 14 19:27:56.102: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-d2td" in namespace "volume-6778" to be "Succeeded or Failed"
Sep 14 19:27:56.246: INFO: Pod "exec-volume-test-preprovisionedpv-d2td": Phase="Pending", Reason="", readiness=false. Elapsed: 143.251818ms
Sep 14 19:27:58.389: INFO: Pod "exec-volume-test-preprovisionedpv-d2td": Phase="Pending", Reason="", readiness=false. Elapsed: 2.287128033s
Sep 14 19:28:00.535: INFO: Pod "exec-volume-test-preprovisionedpv-d2td": Phase="Pending", Reason="", readiness=false. Elapsed: 4.432225158s
Sep 14 19:28:02.679: INFO: Pod "exec-volume-test-preprovisionedpv-d2td": Phase="Pending", Reason="", readiness=false. Elapsed: 6.576690087s
Sep 14 19:28:04.825: INFO: Pod "exec-volume-test-preprovisionedpv-d2td": Phase="Pending", Reason="", readiness=false. Elapsed: 8.722443734s
Sep 14 19:28:06.969: INFO: Pod "exec-volume-test-preprovisionedpv-d2td": Phase="Pending", Reason="", readiness=false. Elapsed: 10.86626296s
Sep 14 19:28:09.113: INFO: Pod "exec-volume-test-preprovisionedpv-d2td": Phase="Pending", Reason="", readiness=false. Elapsed: 13.010470152s
Sep 14 19:28:11.258: INFO: Pod "exec-volume-test-preprovisionedpv-d2td": Phase="Pending", Reason="", readiness=false. Elapsed: 15.155341744s
Sep 14 19:28:13.403: INFO: Pod "exec-volume-test-preprovisionedpv-d2td": Phase="Pending", Reason="", readiness=false. Elapsed: 17.300177587s
Sep 14 19:28:15.547: INFO: Pod "exec-volume-test-preprovisionedpv-d2td": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.444437265s
STEP: Saw pod success
Sep 14 19:28:15.547: INFO: Pod "exec-volume-test-preprovisionedpv-d2td" satisfied condition "Succeeded or Failed"
Sep 14 19:28:15.690: INFO: Trying to get logs from node ip-172-20-48-93.sa-east-1.compute.internal pod exec-volume-test-preprovisionedpv-d2td container exec-container-preprovisionedpv-d2td: <nil>
STEP: delete the pod
Sep 14 19:28:15.987: INFO: Waiting for pod exec-volume-test-preprovisionedpv-d2td to disappear
Sep 14 19:28:16.130: INFO: Pod exec-volume-test-preprovisionedpv-d2td no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-d2td
Sep 14 19:28:16.130: INFO: Deleting pod "exec-volume-test-preprovisionedpv-d2td" in namespace "volume-6778"
STEP: Deleting pv and pvc
Sep 14 19:28:16.273: INFO: Deleting PersistentVolumeClaim "pvc-r4rrg"
Sep 14 19:28:16.417: INFO: Deleting PersistentVolume "aws-jgmjf"
Sep 14 19:28:16.892: INFO: Couldn't delete PD "aws://sa-east-1a/vol-09ad95b5ce6d0fdf7", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-09ad95b5ce6d0fdf7 is currently attached to i-08b1db505a6ff0626
	status code: 400, request id: 903106cb-148c-4367-8a9f-aa43c01001f9
Sep 14 19:28:22.958: INFO: Successfully deleted PD "aws://sa-east-1a/vol-09ad95b5ce6d0fdf7".
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 14 19:28:22.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-6778" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":13,"skipped":59,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:28:23.278: INFO: Only supported for providers [gce gke] (not aws)
... skipping 102 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 14 19:28:26.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-6194" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should update/patch PodDisruptionBudget status [Conformance]","total":-1,"completed":12,"skipped":82,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:28:26.922: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 41 lines ...
• [SLOW TEST:9.096 seconds]
[sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","total":-1,"completed":9,"skipped":35,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:28:27.356: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 84 lines ...
Sep 14 19:28:29.778: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n"
Sep 14 19:28:29.778: INFO: stdout: "scheduler controller-manager etcd-1 etcd-0"
STEP: getting details of componentstatuses
STEP: getting status of scheduler
Sep 14 19:28:29.778: INFO: Running '/tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-1754 get componentstatuses scheduler'
Sep 14 19:28:30.300: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n"
Sep 14 19:28:30.300: INFO: stdout: "NAME        STATUS    MESSAGE   ERROR\nscheduler   Healthy   ok        \n"
STEP: getting status of controller-manager
Sep 14 19:28:30.300: INFO: Running '/tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-1754 get componentstatuses controller-manager'
Sep 14 19:28:31.173: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n"
Sep 14 19:28:31.173: INFO: stdout: "NAME                 STATUS    MESSAGE   ERROR\ncontroller-manager   Healthy   ok        \n"
STEP: getting status of etcd-1
Sep 14 19:28:31.173: INFO: Running '/tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-1754 get componentstatuses etcd-1'
Sep 14 19:28:31.695: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n"
Sep 14 19:28:31.695: INFO: stdout: "NAME     STATUS    MESSAGE             ERROR\netcd-1   Healthy   {\"health\":\"true\"}   \n"
STEP: getting status of etcd-0
Sep 14 19:28:31.695: INFO: Running '/tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-1754 get componentstatuses etcd-0'
Sep 14 19:28:32.219: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n"
Sep 14 19:28:32.220: INFO: stdout: "NAME     STATUS    MESSAGE             ERROR\netcd-0   Healthy   {\"health\":\"true\"}   \n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 14 19:28:32.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1754" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl get componentstatuses should get componentstatuses","total":-1,"completed":10,"skipped":38,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:28:32.532: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 174 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should create read/write inline ephemeral volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:161
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read/write inline ephemeral volume","total":-1,"completed":9,"skipped":61,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:28:34.987: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 32 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 14 19:28:36.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6007" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":-1,"completed":10,"skipped":65,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:28:36.699: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 2 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: windows-gcepd]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1301
------------------------------
... skipping 73 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:238

      Driver hostPathSymlink doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":-1,"completed":10,"skipped":77,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 14 19:27:41.849: INFO: >>> kubeConfig: /root/.kube/config
... skipping 6 lines ...
Sep 14 19:27:42.569: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-979jcgrv
STEP: creating a claim
Sep 14 19:27:42.713: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-hp8s
STEP: Creating a pod to test subpath
Sep 14 19:27:43.148: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-hp8s" in namespace "provisioning-979" to be "Succeeded or Failed"
Sep 14 19:27:43.294: INFO: Pod "pod-subpath-test-dynamicpv-hp8s": Phase="Pending", Reason="", readiness=false. Elapsed: 145.764952ms
Sep 14 19:27:45.438: INFO: Pod "pod-subpath-test-dynamicpv-hp8s": Phase="Pending", Reason="", readiness=false. Elapsed: 2.289636435s
Sep 14 19:27:47.583: INFO: Pod "pod-subpath-test-dynamicpv-hp8s": Phase="Pending", Reason="", readiness=false. Elapsed: 4.434628482s
Sep 14 19:27:49.728: INFO: Pod "pod-subpath-test-dynamicpv-hp8s": Phase="Pending", Reason="", readiness=false. Elapsed: 6.579599052s
Sep 14 19:27:51.872: INFO: Pod "pod-subpath-test-dynamicpv-hp8s": Phase="Pending", Reason="", readiness=false. Elapsed: 8.723857628s
Sep 14 19:27:54.017: INFO: Pod "pod-subpath-test-dynamicpv-hp8s": Phase="Pending", Reason="", readiness=false. Elapsed: 10.868860429s
... skipping 4 lines ...
Sep 14 19:28:04.744: INFO: Pod "pod-subpath-test-dynamicpv-hp8s": Phase="Pending", Reason="", readiness=false. Elapsed: 21.595016245s
Sep 14 19:28:06.889: INFO: Pod "pod-subpath-test-dynamicpv-hp8s": Phase="Pending", Reason="", readiness=false. Elapsed: 23.740067451s
Sep 14 19:28:09.033: INFO: Pod "pod-subpath-test-dynamicpv-hp8s": Phase="Pending", Reason="", readiness=false. Elapsed: 25.884759204s
Sep 14 19:28:11.178: INFO: Pod "pod-subpath-test-dynamicpv-hp8s": Phase="Pending", Reason="", readiness=false. Elapsed: 28.029562108s
Sep 14 19:28:13.322: INFO: Pod "pod-subpath-test-dynamicpv-hp8s": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.173585716s
STEP: Saw pod success
Sep 14 19:28:13.322: INFO: Pod "pod-subpath-test-dynamicpv-hp8s" satisfied condition "Succeeded or Failed"
Sep 14 19:28:13.466: INFO: Trying to get logs from node ip-172-20-48-74.sa-east-1.compute.internal pod pod-subpath-test-dynamicpv-hp8s container test-container-subpath-dynamicpv-hp8s: <nil>
STEP: delete the pod
Sep 14 19:28:13.766: INFO: Waiting for pod pod-subpath-test-dynamicpv-hp8s to disappear
Sep 14 19:28:13.909: INFO: Pod pod-subpath-test-dynamicpv-hp8s no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-hp8s
Sep 14 19:28:13.909: INFO: Deleting pod "pod-subpath-test-dynamicpv-hp8s" in namespace "provisioning-979"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:369
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":11,"skipped":77,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]"]}

S
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 108 lines ...
STEP: creating an object not containing a namespace with in-cluster config
Sep 14 19:26:20.617: INFO: Running '/tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-6726 exec httpd -- /bin/sh -x -c /tmp/kubectl create -f /tmp/invalid-configmap-without-namespace.yaml --v=6 2>&1'
Sep 14 19:26:22.341: INFO: rc: 255
STEP: trying to use kubectl with invalid token
Sep 14 19:26:22.341: INFO: Running '/tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-6726 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --token=invalid --v=7 2>&1'
Sep 14 19:26:24.172: INFO: rc: 255
Sep 14 19:26:24.172: INFO: got err error running /tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-6726 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --token=invalid --v=7 2>&1:
Command stdout:
I0914 19:26:23.963860     199 merged_client_builder.go:163] Using in-cluster namespace
I0914 19:26:23.964769     199 merged_client_builder.go:121] Using in-cluster configuration
I0914 19:26:23.974787     199 merged_client_builder.go:121] Using in-cluster configuration
I0914 19:26:23.987901     199 merged_client_builder.go:121] Using in-cluster configuration
I0914 19:26:23.988795     199 round_trippers.go:432] GET https://100.64.0.1:443/api/v1/namespaces/kubectl-6726/pods?limit=500
... skipping 8 lines ...
  "metadata": {},
  "status": "Failure",
  "message": "Unauthorized",
  "reason": "Unauthorized",
  "code": 401
}]
F0914 19:26:23.996202     199 helpers.go:115] error: You must be logged in to the server (Unauthorized)
goroutine 1 [running]:
k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc000130001, 0xc000024000, 0x68, 0x1af)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1021 +0xb9
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x3055420, 0xc000000003, 0x0, 0x0, 0xc00069eee0, 0x25f2cf0, 0xa, 0x73, 0x40e300)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:970 +0x191
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0x3055420, 0xc000000003, 0x0, 0x0, 0x0, 0x0, 0x2, 0xc0009215e0, 0x1, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:733 +0x16f
k8s.io/kubernetes/vendor/k8s.io/klog/v2.FatalDepth(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1495
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.fatal(0xc0005685c0, 0x3a, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:93 +0x288
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.checkErr(0x207dd80, 0xc00011c0d8, 0x1f07e70)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:177 +0x8a3
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.CheckErr(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:115
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/get.NewCmdGet.func1(0xc000352b00, 0xc0004abda0, 0x1, 0x3)
... skipping 72 lines ...
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/transport.go:2054 +0x728

stderr:
+ /tmp/kubectl get pods '--token=invalid' '--v=7'
command terminated with exit code 255

error:
exit status 255
STEP: trying to use kubectl with invalid server
Sep 14 19:26:24.172: INFO: Running '/tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-6726 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --server=invalid --v=6 2>&1'
Sep 14 19:28:25.880: INFO: rc: 255
Sep 14 19:28:25.880: INFO: got err error running /tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-6726 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --server=invalid --v=6 2>&1:
Command stdout:
I0914 19:26:25.629849     209 merged_client_builder.go:163] Using in-cluster namespace
I0914 19:26:40.650012     209 round_trippers.go:454] GET http://invalid/api?timeout=32s  in 15019 milliseconds
I0914 19:26:40.650092     209 cached_discovery.go:121] skipped caching discovery info due to Get "http://invalid/api?timeout=32s": dial tcp: lookup invalid on 100.64.0.10:53: no such host
I0914 19:27:10.651391     209 round_trippers.go:454] GET http://invalid/api?timeout=32s  in 30000 milliseconds
I0914 19:27:10.651469     209 cached_discovery.go:121] skipped caching discovery info due to Get "http://invalid/api?timeout=32s": dial tcp: i/o timeout
I0914 19:27:10.651487     209 shortcut.go:89] Error loading discovery information: Get "http://invalid/api?timeout=32s": dial tcp: i/o timeout
I0914 19:27:40.652458     209 round_trippers.go:454] GET http://invalid/api?timeout=32s  in 30000 milliseconds
I0914 19:27:40.652529     209 cached_discovery.go:121] skipped caching discovery info due to Get "http://invalid/api?timeout=32s": dial tcp: i/o timeout
I0914 19:28:10.653271     209 round_trippers.go:454] GET http://invalid/api?timeout=32s  in 30000 milliseconds
I0914 19:28:10.653344     209 cached_discovery.go:121] skipped caching discovery info due to Get "http://invalid/api?timeout=32s": dial tcp: i/o timeout
I0914 19:28:25.672900     209 round_trippers.go:454] GET http://invalid/api?timeout=32s  in 15019 milliseconds
I0914 19:28:25.673440     209 cached_discovery.go:121] skipped caching discovery info due to Get "http://invalid/api?timeout=32s": dial tcp: lookup invalid on 100.64.0.10:53: no such host
I0914 19:28:25.673741     209 helpers.go:234] Connection error: Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 100.64.0.10:53: no such host
F0914 19:28:25.673912     209 helpers.go:115] Unable to connect to the server: dial tcp: lookup invalid on 100.64.0.10:53: no such host
goroutine 1 [running]:
k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc000130001, 0xc00058c000, 0x88, 0x1b8)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1021 +0xb9
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x3055420, 0xc000000003, 0x0, 0x0, 0xc00070d110, 0x25f2cf0, 0xa, 0x73, 0x40e300)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:970 +0x191
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0x3055420, 0xc000000003, 0x0, 0x0, 0x0, 0x0, 0x2, 0xc0005502e0, 0x1, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:733 +0x16f
k8s.io/kubernetes/vendor/k8s.io/klog/v2.FatalDepth(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1495
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.fatal(0xc000048180, 0x59, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:93 +0x288
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.checkErr(0x207d0e0, 0xc0004a8780, 0x1f07e70)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:188 +0x935
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.CheckErr(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:115
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/get.NewCmdGet.func1(0xc0002a42c0, 0xc00036db30, 0x1, 0x3)
... skipping 24 lines ...
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/util/logs/logs.go:51 +0x96

stderr:
+ /tmp/kubectl get pods '--server=invalid' '--v=6'
command terminated with exit code 255

error:
exit status 255
STEP: trying to use kubectl with invalid namespace
Sep 14 19:28:25.880: INFO: Running '/tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-6726 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --namespace=invalid --v=6 2>&1'
Sep 14 19:28:27.561: INFO: stderr: "+ /tmp/kubectl get pods '--namespace=invalid' '--v=6'\n"
Sep 14 19:28:27.561: INFO: stdout: "I0914 19:28:27.472306     221 merged_client_builder.go:121] Using in-cluster configuration\nI0914 19:28:27.476275     221 merged_client_builder.go:121] Using in-cluster configuration\nI0914 19:28:27.480163     221 merged_client_builder.go:121] Using in-cluster configuration\nI0914 19:28:27.486656     221 round_trippers.go:454] GET https://100.64.0.1:443/api/v1/namespaces/invalid/pods?limit=500 200 OK in 6 milliseconds\nNo resources found in invalid namespace.\n"
Sep 14 19:28:27.561: INFO: stdout: I0914 19:28:27.472306     221 merged_client_builder.go:121] Using in-cluster configuration
... skipping 74 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:376
    should handle in-cluster config
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:636
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should handle in-cluster config","total":-1,"completed":9,"skipped":39,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:28:41.952: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 66 lines ...
• [SLOW TEST:90.998 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":15,"failed":0}

SS
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":34,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 14 19:27:02.744: INFO: >>> kubeConfig: /root/.kube/config
... skipping 46 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      Verify if offline PVC expansion works
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:174
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works","total":-1,"completed":9,"skipped":34,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:28:44.024: INFO: Only supported for providers [vsphere] (not aws)
... skipping 35 lines ...
      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1301
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":-1,"completed":13,"skipped":88,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 14 19:28:29.109: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 55 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume one after the other
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":14,"skipped":88,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:28:44.794: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 97 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95
    should perform rolling updates and roll backs of template modifications [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":-1,"completed":9,"skipped":57,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:28:46.145: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 52 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 14 19:28:46.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "podtemplate-8348" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":-1,"completed":15,"skipped":91,"failed":0}
[BeforeEach] [sig-storage] Multi-AZ Cluster Volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 14 19:28:46.833: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename multi-az
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 29 lines ...
Sep 14 19:28:08.671: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
STEP: creating a test aws volume
Sep 14 19:28:09.739: INFO: Successfully created a new PD: "aws://sa-east-1a/vol-018e52432441eaf97".
Sep 14 19:28:09.739: INFO: Creating resource for inline volume
STEP: Creating pod exec-volume-test-inlinevolume-qqqq
STEP: Creating a pod to test exec-volume-test
Sep 14 19:28:09.885: INFO: Waiting up to 5m0s for pod "exec-volume-test-inlinevolume-qqqq" in namespace "volume-369" to be "Succeeded or Failed"
Sep 14 19:28:10.028: INFO: Pod "exec-volume-test-inlinevolume-qqqq": Phase="Pending", Reason="", readiness=false. Elapsed: 143.031001ms
Sep 14 19:28:12.172: INFO: Pod "exec-volume-test-inlinevolume-qqqq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.286827182s
Sep 14 19:28:14.317: INFO: Pod "exec-volume-test-inlinevolume-qqqq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.431457243s
Sep 14 19:28:16.462: INFO: Pod "exec-volume-test-inlinevolume-qqqq": Phase="Pending", Reason="", readiness=false. Elapsed: 6.576428839s
Sep 14 19:28:18.609: INFO: Pod "exec-volume-test-inlinevolume-qqqq": Phase="Pending", Reason="", readiness=false. Elapsed: 8.723676937s
Sep 14 19:28:20.753: INFO: Pod "exec-volume-test-inlinevolume-qqqq": Phase="Pending", Reason="", readiness=false. Elapsed: 10.868027995s
Sep 14 19:28:22.899: INFO: Pod "exec-volume-test-inlinevolume-qqqq": Phase="Pending", Reason="", readiness=false. Elapsed: 13.014361346s
Sep 14 19:28:25.044: INFO: Pod "exec-volume-test-inlinevolume-qqqq": Phase="Pending", Reason="", readiness=false. Elapsed: 15.158568618s
Sep 14 19:28:27.188: INFO: Pod "exec-volume-test-inlinevolume-qqqq": Phase="Pending", Reason="", readiness=false. Elapsed: 17.302561577s
Sep 14 19:28:29.331: INFO: Pod "exec-volume-test-inlinevolume-qqqq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.446067753s
STEP: Saw pod success
Sep 14 19:28:29.331: INFO: Pod "exec-volume-test-inlinevolume-qqqq" satisfied condition "Succeeded or Failed"
Sep 14 19:28:29.474: INFO: Trying to get logs from node ip-172-20-48-74.sa-east-1.compute.internal pod exec-volume-test-inlinevolume-qqqq container exec-container-inlinevolume-qqqq: <nil>
STEP: delete the pod
Sep 14 19:28:29.772: INFO: Waiting for pod exec-volume-test-inlinevolume-qqqq to disappear
Sep 14 19:28:29.916: INFO: Pod exec-volume-test-inlinevolume-qqqq no longer exists
STEP: Deleting pod exec-volume-test-inlinevolume-qqqq
Sep 14 19:28:29.916: INFO: Deleting pod "exec-volume-test-inlinevolume-qqqq" in namespace "volume-369"
Sep 14 19:28:30.353: INFO: Couldn't delete PD "aws://sa-east-1a/vol-018e52432441eaf97", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-018e52432441eaf97 is currently attached to i-0dd97304ea8ca0263
	status code: 400, request id: 0ae9a2b6-fa55-4ff9-9ada-c0a77ea97c4e
Sep 14 19:28:36.198: INFO: Couldn't delete PD "aws://sa-east-1a/vol-018e52432441eaf97", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-018e52432441eaf97 is currently attached to i-0dd97304ea8ca0263
	status code: 400, request id: b197f01f-3af9-46c9-aa27-3287b42b86d4
Sep 14 19:28:42.034: INFO: Couldn't delete PD "aws://sa-east-1a/vol-018e52432441eaf97", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-018e52432441eaf97 is currently attached to i-0dd97304ea8ca0263
	status code: 400, request id: 6c6b67f2-f05a-4adf-9c06-2d105d5ee68c
Sep 14 19:28:47.862: INFO: Successfully deleted PD "aws://sa-east-1a/vol-018e52432441eaf97".
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 14 19:28:47.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-369" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (ext4)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume","total":-1,"completed":12,"skipped":109,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:28:48.166: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 88 lines ...
• [SLOW TEST:26.241 seconds]
[sig-network] Conntrack
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to preserve UDP traffic when server pod cycles for a NodePort service
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:130
------------------------------
{"msg":"PASSED [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a NodePort service","total":-1,"completed":12,"skipped":28,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 14 19:28:46.184: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:110
STEP: Creating configMap with name configmap-test-volume-map-04476a4b-c97e-4e3b-8bbb-3fedd76b6fd3
STEP: Creating a pod to test consume configMaps
Sep 14 19:28:47.188: INFO: Waiting up to 5m0s for pod "pod-configmaps-d316a118-821c-449b-86d8-fd56015aa8fb" in namespace "configmap-8402" to be "Succeeded or Failed"
Sep 14 19:28:47.331: INFO: Pod "pod-configmaps-d316a118-821c-449b-86d8-fd56015aa8fb": Phase="Pending", Reason="", readiness=false. Elapsed: 142.810499ms
Sep 14 19:28:49.475: INFO: Pod "pod-configmaps-d316a118-821c-449b-86d8-fd56015aa8fb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.286557919s
STEP: Saw pod success
Sep 14 19:28:49.475: INFO: Pod "pod-configmaps-d316a118-821c-449b-86d8-fd56015aa8fb" satisfied condition "Succeeded or Failed"
Sep 14 19:28:49.617: INFO: Trying to get logs from node ip-172-20-50-202.sa-east-1.compute.internal pod pod-configmaps-d316a118-821c-449b-86d8-fd56015aa8fb container agnhost-container: <nil>
STEP: delete the pod
Sep 14 19:28:49.924: INFO: Waiting for pod pod-configmaps-d316a118-821c-449b-86d8-fd56015aa8fb to disappear
Sep 14 19:28:50.066: INFO: Pod pod-configmaps-d316a118-821c-449b-86d8-fd56015aa8fb no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 127 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:449
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":5,"skipped":79,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:28:50.947: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 20 lines ...
Sep 14 19:28:01.564: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename volume
STEP: Waiting for a default service account to be provisioned in namespace
[It] should store data
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
Sep 14 19:28:02.281: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Sep 14 19:28:02.575: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-volume-2143" in namespace "volume-2143" to be "Succeeded or Failed"
Sep 14 19:28:02.718: INFO: Pod "hostpath-symlink-prep-volume-2143": Phase="Pending", Reason="", readiness=false. Elapsed: 142.899603ms
Sep 14 19:28:04.862: INFO: Pod "hostpath-symlink-prep-volume-2143": Phase="Pending", Reason="", readiness=false. Elapsed: 2.28681401s
Sep 14 19:28:07.005: INFO: Pod "hostpath-symlink-prep-volume-2143": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.430251212s
STEP: Saw pod success
Sep 14 19:28:07.006: INFO: Pod "hostpath-symlink-prep-volume-2143" satisfied condition "Succeeded or Failed"
Sep 14 19:28:07.006: INFO: Deleting pod "hostpath-symlink-prep-volume-2143" in namespace "volume-2143"
Sep 14 19:28:07.154: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-volume-2143" to be fully deleted
Sep 14 19:28:07.297: INFO: Creating resource for inline volume
STEP: starting hostpathsymlink-injector
STEP: Writing text file contents in the container.
Sep 14 19:28:11.728: INFO: Running '/tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=volume-2143 exec hostpathsymlink-injector --namespace=volume-2143 -- /bin/sh -c echo 'Hello from hostPathSymlink from namespace volume-2143' > /opt/0/index.html'
... skipping 46 lines ...
Sep 14 19:28:45.386: INFO: Pod hostpathsymlink-client still exists
Sep 14 19:28:47.240: INFO: Waiting for pod hostpathsymlink-client to disappear
Sep 14 19:28:47.387: INFO: Pod hostpathsymlink-client still exists
Sep 14 19:28:49.240: INFO: Waiting for pod hostpathsymlink-client to disappear
Sep 14 19:28:49.383: INFO: Pod hostpathsymlink-client no longer exists
STEP: cleaning the environment after hostpathsymlink
Sep 14 19:28:49.529: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-volume-2143" in namespace "volume-2143" to be "Succeeded or Failed"
Sep 14 19:28:49.672: INFO: Pod "hostpath-symlink-prep-volume-2143": Phase="Pending", Reason="", readiness=false. Elapsed: 142.985358ms
Sep 14 19:28:51.816: INFO: Pod "hostpath-symlink-prep-volume-2143": Phase="Pending", Reason="", readiness=false. Elapsed: 2.286855299s
Sep 14 19:28:53.962: INFO: Pod "hostpath-symlink-prep-volume-2143": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.432794281s
STEP: Saw pod success
Sep 14 19:28:53.962: INFO: Pod "hostpath-symlink-prep-volume-2143" satisfied condition "Succeeded or Failed"
Sep 14 19:28:53.962: INFO: Deleting pod "hostpath-symlink-prep-volume-2143" in namespace "volume-2143"
Sep 14 19:28:54.113: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-volume-2143" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 14 19:28:54.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-2143" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] volumes should store data","total":-1,"completed":5,"skipped":64,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]"]}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:28:54.566: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 25 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: vsphere]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [vsphere] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1437
------------------------------
... skipping 76 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 14 19:28:54.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslice-6552" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","total":-1,"completed":6,"skipped":80,"failed":1,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:28:54.916: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 153 lines ...
• [SLOW TEST:73.063 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":44,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:28:57.572: INFO: Driver aws doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 66 lines ...
Sep 14 19:28:48.193: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support existing directory
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
Sep 14 19:28:48.917: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Sep 14 19:28:49.210: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-6629" in namespace "provisioning-6629" to be "Succeeded or Failed"
Sep 14 19:28:49.353: INFO: Pod "hostpath-symlink-prep-provisioning-6629": Phase="Pending", Reason="", readiness=false. Elapsed: 143.189179ms
Sep 14 19:28:51.497: INFO: Pod "hostpath-symlink-prep-provisioning-6629": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.287165337s
STEP: Saw pod success
Sep 14 19:28:51.497: INFO: Pod "hostpath-symlink-prep-provisioning-6629" satisfied condition "Succeeded or Failed"
Sep 14 19:28:51.497: INFO: Deleting pod "hostpath-symlink-prep-provisioning-6629" in namespace "provisioning-6629"
Sep 14 19:28:51.645: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-6629" to be fully deleted
Sep 14 19:28:51.788: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-r2fz
STEP: Creating a pod to test subpath
Sep 14 19:28:51.935: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-r2fz" in namespace "provisioning-6629" to be "Succeeded or Failed"
Sep 14 19:28:52.078: INFO: Pod "pod-subpath-test-inlinevolume-r2fz": Phase="Pending", Reason="", readiness=false. Elapsed: 143.696875ms
Sep 14 19:28:54.222: INFO: Pod "pod-subpath-test-inlinevolume-r2fz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.287390828s
Sep 14 19:28:56.368: INFO: Pod "pod-subpath-test-inlinevolume-r2fz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.432917026s
STEP: Saw pod success
Sep 14 19:28:56.368: INFO: Pod "pod-subpath-test-inlinevolume-r2fz" satisfied condition "Succeeded or Failed"
Sep 14 19:28:56.511: INFO: Trying to get logs from node ip-172-20-48-74.sa-east-1.compute.internal pod pod-subpath-test-inlinevolume-r2fz container test-container-volume-inlinevolume-r2fz: <nil>
STEP: delete the pod
Sep 14 19:28:56.810: INFO: Waiting for pod pod-subpath-test-inlinevolume-r2fz to disappear
Sep 14 19:28:56.954: INFO: Pod pod-subpath-test-inlinevolume-r2fz no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-r2fz
Sep 14 19:28:56.954: INFO: Deleting pod "pod-subpath-test-inlinevolume-r2fz" in namespace "provisioning-6629"
STEP: Deleting pod
Sep 14 19:28:57.097: INFO: Deleting pod "pod-subpath-test-inlinevolume-r2fz" in namespace "provisioning-6629"
Sep 14 19:28:57.384: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-6629" in namespace "provisioning-6629" to be "Succeeded or Failed"
Sep 14 19:28:57.528: INFO: Pod "hostpath-symlink-prep-provisioning-6629": Phase="Pending", Reason="", readiness=false. Elapsed: 143.566662ms
Sep 14 19:28:59.674: INFO: Pod "hostpath-symlink-prep-provisioning-6629": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.289616081s
STEP: Saw pod success
Sep 14 19:28:59.674: INFO: Pod "hostpath-symlink-prep-provisioning-6629" satisfied condition "Succeeded or Failed"
Sep 14 19:28:59.674: INFO: Deleting pod "hostpath-symlink-prep-provisioning-6629" in namespace "provisioning-6629"
Sep 14 19:28:59.820: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-6629" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 14 19:28:59.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-6629" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support existing directory","total":-1,"completed":13,"skipped":112,"failed":0}

SSSS
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":10,"skipped":65,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 14 19:28:50.365: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support non-existent path
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
Sep 14 19:28:51.078: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Sep 14 19:28:51.370: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-7169" in namespace "provisioning-7169" to be "Succeeded or Failed"
Sep 14 19:28:51.513: INFO: Pod "hostpath-symlink-prep-provisioning-7169": Phase="Pending", Reason="", readiness=false. Elapsed: 142.477407ms
Sep 14 19:28:53.656: INFO: Pod "hostpath-symlink-prep-provisioning-7169": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.285631791s
STEP: Saw pod success
Sep 14 19:28:53.656: INFO: Pod "hostpath-symlink-prep-provisioning-7169" satisfied condition "Succeeded or Failed"
Sep 14 19:28:53.656: INFO: Deleting pod "hostpath-symlink-prep-provisioning-7169" in namespace "provisioning-7169"
Sep 14 19:28:53.821: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-7169" to be fully deleted
Sep 14 19:28:53.965: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-fgsf
STEP: Creating a pod to test subpath
Sep 14 19:28:54.113: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-fgsf" in namespace "provisioning-7169" to be "Succeeded or Failed"
Sep 14 19:28:54.256: INFO: Pod "pod-subpath-test-inlinevolume-fgsf": Phase="Pending", Reason="", readiness=false. Elapsed: 143.534475ms
Sep 14 19:28:56.399: INFO: Pod "pod-subpath-test-inlinevolume-fgsf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.286476339s
STEP: Saw pod success
Sep 14 19:28:56.399: INFO: Pod "pod-subpath-test-inlinevolume-fgsf" satisfied condition "Succeeded or Failed"
Sep 14 19:28:56.542: INFO: Trying to get logs from node ip-172-20-48-74.sa-east-1.compute.internal pod pod-subpath-test-inlinevolume-fgsf container test-container-volume-inlinevolume-fgsf: <nil>
STEP: delete the pod
Sep 14 19:28:56.832: INFO: Waiting for pod pod-subpath-test-inlinevolume-fgsf to disappear
Sep 14 19:28:56.982: INFO: Pod pod-subpath-test-inlinevolume-fgsf no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-fgsf
Sep 14 19:28:56.982: INFO: Deleting pod "pod-subpath-test-inlinevolume-fgsf" in namespace "provisioning-7169"
STEP: Deleting pod
Sep 14 19:28:57.124: INFO: Deleting pod "pod-subpath-test-inlinevolume-fgsf" in namespace "provisioning-7169"
Sep 14 19:28:57.414: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-7169" in namespace "provisioning-7169" to be "Succeeded or Failed"
Sep 14 19:28:57.557: INFO: Pod "hostpath-symlink-prep-provisioning-7169": Phase="Pending", Reason="", readiness=false. Elapsed: 142.551798ms
Sep 14 19:28:59.700: INFO: Pod "hostpath-symlink-prep-provisioning-7169": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.285769005s
STEP: Saw pod success
Sep 14 19:28:59.700: INFO: Pod "hostpath-symlink-prep-provisioning-7169" satisfied condition "Succeeded or Failed"
Sep 14 19:28:59.700: INFO: Deleting pod "hostpath-symlink-prep-provisioning-7169" in namespace "provisioning-7169"
Sep 14 19:28:59.846: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-7169" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 14 19:28:59.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-7169" for this suite.
... skipping 8 lines ...
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path","total":-1,"completed":11,"skipped":65,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:29:00.308: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 25 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50
[It] volume on default medium should have the correct mode using FSGroup
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:71
STEP: Creating a pod to test emptydir volume type on node default medium
Sep 14 19:28:58.497: INFO: Waiting up to 5m0s for pod "pod-5e8b88e8-1d67-446b-b13a-4feb5a012c78" in namespace "emptydir-7316" to be "Succeeded or Failed"
Sep 14 19:28:58.649: INFO: Pod "pod-5e8b88e8-1d67-446b-b13a-4feb5a012c78": Phase="Pending", Reason="", readiness=false. Elapsed: 151.725394ms
Sep 14 19:29:00.795: INFO: Pod "pod-5e8b88e8-1d67-446b-b13a-4feb5a012c78": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.297807746s
STEP: Saw pod success
Sep 14 19:29:00.795: INFO: Pod "pod-5e8b88e8-1d67-446b-b13a-4feb5a012c78" satisfied condition "Succeeded or Failed"
Sep 14 19:29:00.939: INFO: Trying to get logs from node ip-172-20-41-171.sa-east-1.compute.internal pod pod-5e8b88e8-1d67-446b-b13a-4feb5a012c78 container test-container: <nil>
STEP: delete the pod
Sep 14 19:29:01.257: INFO: Waiting for pod pod-5e8b88e8-1d67-446b-b13a-4feb5a012c78 to disappear
Sep 14 19:29:01.401: INFO: Pod pod-5e8b88e8-1d67-446b-b13a-4feb5a012c78 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 14 19:29:01.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7316" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on default medium should have the correct mode using FSGroup","total":-1,"completed":8,"skipped":53,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:29:01.720: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 147 lines ...
      Driver hostPath doesn't support PreprovisionedPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":52,"failed":0}
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 14 19:27:52.555: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 14 lines ...
STEP: Registering slow webhook via the AdmissionRegistration API
Sep 14 19:28:10.797: INFO: Waiting for webhook configuration to be ready...
Sep 14 19:28:21.186: INFO: Waiting for webhook configuration to be ready...
Sep 14 19:28:31.488: INFO: Waiting for webhook configuration to be ready...
Sep 14 19:28:41.787: INFO: Waiting for webhook configuration to be ready...
Sep 14 19:28:52.076: INFO: Waiting for webhook configuration to be ready...
Sep 14 19:28:52.077: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.errorString | 0xc00023c250>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 489 lines ...
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should honor timeout [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Sep 14 19:28:52.077: waiting for webhook configuration to be ready
  Unexpected error:
      <*errors.errorString | 0xc00023c250>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:2188
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":-1,"completed":7,"skipped":52,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:29:07.026: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 24 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Sep 14 19:29:04.707: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-e1f0a92f-3e91-4ca3-ad4d-aaa18683d5ab" in namespace "security-context-test-8409" to be "Succeeded or Failed"
Sep 14 19:29:04.851: INFO: Pod "busybox-privileged-false-e1f0a92f-3e91-4ca3-ad4d-aaa18683d5ab": Phase="Pending", Reason="", readiness=false. Elapsed: 143.729863ms
Sep 14 19:29:06.995: INFO: Pod "busybox-privileged-false-e1f0a92f-3e91-4ca3-ad4d-aaa18683d5ab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.287748985s
Sep 14 19:29:06.995: INFO: Pod "busybox-privileged-false-e1f0a92f-3e91-4ca3-ad4d-aaa18683d5ab" satisfied condition "Succeeded or Failed"
Sep 14 19:29:07.140: INFO: Got logs for pod "busybox-privileged-false-e1f0a92f-3e91-4ca3-ad4d-aaa18683d5ab": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 14 19:29:07.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-8409" for this suite.

... skipping 23 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  When pod refers to non-existent ephemeral storage
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:53
    should allow deletion of pod with invalid volume : configmap
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55
------------------------------
{"msg":"PASSED [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : configmap","total":-1,"completed":11,"skipped":42,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:106
STEP: Creating a pod to test downward API volume plugin
Sep 14 19:29:01.163: INFO: Waiting up to 5m0s for pod "metadata-volume-c975c5c4-02f6-4ccd-9dd9-16c945376ca6" in namespace "projected-6495" to be "Succeeded or Failed"
Sep 14 19:29:01.307: INFO: Pod "metadata-volume-c975c5c4-02f6-4ccd-9dd9-16c945376ca6": Phase="Pending", Reason="", readiness=false. Elapsed: 143.434663ms
Sep 14 19:29:03.452: INFO: Pod "metadata-volume-c975c5c4-02f6-4ccd-9dd9-16c945376ca6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.288418638s
Sep 14 19:29:05.596: INFO: Pod "metadata-volume-c975c5c4-02f6-4ccd-9dd9-16c945376ca6": Phase="Running", Reason="", readiness=true. Elapsed: 4.432699959s
Sep 14 19:29:07.740: INFO: Pod "metadata-volume-c975c5c4-02f6-4ccd-9dd9-16c945376ca6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.576478053s
STEP: Saw pod success
Sep 14 19:29:07.740: INFO: Pod "metadata-volume-c975c5c4-02f6-4ccd-9dd9-16c945376ca6" satisfied condition "Succeeded or Failed"
Sep 14 19:29:07.884: INFO: Trying to get logs from node ip-172-20-41-171.sa-east-1.compute.internal pod metadata-volume-c975c5c4-02f6-4ccd-9dd9-16c945376ca6 container client-container: <nil>
STEP: delete the pod
Sep 14 19:29:08.183: INFO: Waiting for pod metadata-volume-c975c5c4-02f6-4ccd-9dd9-16c945376ca6 to disappear
Sep 14 19:29:08.337: INFO: Pod metadata-volume-c975c5c4-02f6-4ccd-9dd9-16c945376ca6 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:8.327 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:106
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":14,"skipped":119,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:29:08.651: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
... skipping 195 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      Verify if offline PVC expansion works
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:174
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works","total":-1,"completed":13,"skipped":81,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}

S
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 14 19:29:07.910: INFO: Waiting up to 5m0s for pod "downwardapi-volume-131b4511-7702-4fe3-ba0d-5e9cc0bef6a3" in namespace "projected-3627" to be "Succeeded or Failed"
Sep 14 19:29:08.053: INFO: Pod "downwardapi-volume-131b4511-7702-4fe3-ba0d-5e9cc0bef6a3": Phase="Pending", Reason="", readiness=false. Elapsed: 143.431574ms
Sep 14 19:29:10.197: INFO: Pod "downwardapi-volume-131b4511-7702-4fe3-ba0d-5e9cc0bef6a3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.287568471s
STEP: Saw pod success
Sep 14 19:29:10.198: INFO: Pod "downwardapi-volume-131b4511-7702-4fe3-ba0d-5e9cc0bef6a3" satisfied condition "Succeeded or Failed"
Sep 14 19:29:10.342: INFO: Trying to get logs from node ip-172-20-50-202.sa-east-1.compute.internal pod downwardapi-volume-131b4511-7702-4fe3-ba0d-5e9cc0bef6a3 container client-container: <nil>
STEP: delete the pod
Sep 14 19:29:10.634: INFO: Waiting for pod downwardapi-volume-131b4511-7702-4fe3-ba0d-5e9cc0bef6a3 to disappear
Sep 14 19:29:10.778: INFO: Pod downwardapi-volume-131b4511-7702-4fe3-ba0d-5e9cc0bef6a3 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 14 19:29:10.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3627" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":55,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:29:11.081: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 63329 lines ...
\" node=\"ip-172-20-48-74.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0914 19:46:30.755242       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"ephemeral-4200-6621/csi-hostpathplugin-0\" node=\"ip-172-20-48-74.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0914 19:46:30.891039       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"ephemeral-4200-6621/csi-hostpath-provisioner-0\" node=\"ip-172-20-48-74.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0914 19:46:31.036695       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"ephemeral-4200-6621/csi-hostpath-resizer-0\" node=\"ip-172-20-48-74.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0914 19:46:31.184580       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"ephemeral-4200-6621/csi-hostpath-snapshotter-0\" node=\"ip-172-20-48-74.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0914 19:46:31.602146       1 factory.go:339] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-4200/inline-volume-tester-bmqtr\" err=\"0/5 nodes are available: 5 persistentvolumeclaim \\\"inline-volume-tester-bmqtr-my-volume-0\\\" not found.\"\nI0914 19:46:35.457574       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"ephemeral-4200/inline-volume-tester-bmqtr\" node=\"ip-172-20-48-74.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0914 19:46:37.342860       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pods-8481/pod-submit-status-2-13\" node=\"ip-172-20-41-171.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0914 19:46:38.017924       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-6461-2227/csi-hostpath-attacher-0\" node=\"ip-172-20-48-74.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0914 19:46:38.459942       1 scheduler.go:604] \"Successfully bound pod to node\" 
pod=\"provisioning-6461-2227/csi-hostpathplugin-0\" node=\"ip-172-20-48-74.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0914 19:46:38.602062       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-6461-2227/csi-hostpath-provisioner-0\" node=\"ip-172-20-48-74.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0914 19:46:38.742141       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-6461-2227/csi-hostpath-resizer-0\" node=\"ip-172-20-48-74.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0914 19:46:38.888175       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-6461-2227/csi-hostpath-snapshotter-0\" node=\"ip-172-20-48-74.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0914 19:46:39.553373       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pods-8481/pod-submit-status-2-14\" node=\"ip-172-20-48-93.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0914 19:46:39.998518       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-4249/aws-injector\" node=\"ip-172-20-48-93.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0914 19:46:41.230429       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-2281/pod-subpath-test-preprovisionedpv-vq2w\" node=\"ip-172-20-50-202.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0914 19:46:42.805920       1 factory.go:339] \"Unable to schedule pod; no fit; waiting\" pod=\"pvc-protection-8762/pvc-tester-f89mr\" err=\"0/5 nodes are available: 5 persistentvolumeclaim \\\"pvc-protectioncxnr7\\\" is being deleted.\"\nI0914 19:46:44.168920       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-6461/pod-subpath-test-dynamicpv-2dxg\" node=\"ip-172-20-48-74.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0914 19:46:47.613245       1 scheduler.go:604] \"Successfully bound 
pod to node\" pod=\"persistent-local-volumes-test-1538/hostexec-ip-172-20-41-171.sa-east-1.compute.internal-twwr9\" node=\"ip-172-20-41-171.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0914 19:46:50.898121       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-3536/hostexec-ip-172-20-41-171.sa-east-1.compute.internal-4728f\" node=\"ip-172-20-41-171.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0914 19:46:52.007312       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-1538/pod-7d4e4a5f-abf4-45ef-ac61-fcd31435c9ac\" node=\"ip-172-20-41-171.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0914 19:46:56.781192       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-1538/pod-8eac9678-a21c-4bfa-aa60-869abd6d04db\" node=\"ip-172-20-41-171.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0914 19:46:57.273169       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-3536/pod-subpath-test-preprovisionedpv-nmtv\" node=\"ip-172-20-41-171.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0914 19:47:00.490227       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"apply-3599/deployment-55649fd747-q82vw\" node=\"ip-172-20-48-93.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0914 19:47:00.505357       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"apply-3599/deployment-55649fd747-psft5\" node=\"ip-172-20-41-171.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0914 19:47:00.505631       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"apply-3599/deployment-55649fd747-hkc4r\" node=\"ip-172-20-48-74.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0914 19:47:00.632261       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"apply-3599/deployment-55649fd747-f698n\" 
node=\"ip-172-20-50-202.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0914 19:47:00.650259       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"apply-3599/deployment-55649fd747-hhtvn\" node=\"ip-172-20-48-93.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0914 19:47:03.069441       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"gc-2602/pod1\" node=\"ip-172-20-41-171.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0914 19:47:03.216325       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"gc-2602/pod2\" node=\"ip-172-20-48-93.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0914 19:47:03.361018       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"gc-2602/pod3\" node=\"ip-172-20-41-171.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0914 19:47:04.929710       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-9497/hostexec-ip-172-20-50-202.sa-east-1.compute.internal-w89mb\" node=\"ip-172-20-50-202.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0914 19:47:05.313449       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"statefulset-8294/ss-0\" node=\"ip-172-20-41-171.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0914 19:47:06.624796       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-6461/pod-subpath-test-dynamicpv-2dxg\" node=\"ip-172-20-48-74.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0914 19:47:09.172490       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-4249/aws-client\" node=\"ip-172-20-41-171.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0914 19:47:09.203350       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-9497/pod-2fabc926-91da-4f8f-a8a4-6c5ac262cade\" node=\"ip-172-20-50-202.sa-east-1.compute.internal\" 
evaluatedNodes=5 feasibleNodes=1\nI0914 19:47:10.375627       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-8090/pod-subpath-test-inlinevolume-wvqx\" node=\"ip-172-20-48-93.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0914 19:47:13.845167       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-9497/pod-a2e55c90-c4d9-4716-a02b-0899619b6b8a\" node=\"ip-172-20-50-202.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0914 19:47:16.175631       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"statefulset-8294/ss-1\" node=\"ip-172-20-48-93.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0914 19:47:16.853583       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"security-context-test-9840/busybox-user-0-52431d42-83eb-450e-aaa3-7f3f60fa6d0c\" node=\"ip-172-20-41-171.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0914 19:47:18.176519       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"security-context-test-5110/busybox-readonly-true-c44ed423-aa52-46c0-8e53-82f501c10304\" node=\"ip-172-20-48-93.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0914 19:47:20.332585       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"svcaccounts-8091/test-pod-15733a2f-6178-4b3d-94cd-260063101735\" node=\"ip-172-20-48-93.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0914 19:47:21.638835       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"downward-api-9397/downwardapi-volume-31dd4256-742b-45a2-8e7f-21ce641db798\" node=\"ip-172-20-48-93.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0914 19:47:24.412867       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"downward-api-2069/downward-api-9b1167ee-dbcd-49cc-a873-d60e5ce6f0a4\" node=\"ip-172-20-48-93.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0914 19:47:25.850210      
 1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-3183/hostexec-ip-172-20-48-93.sa-east-1.compute.internal-v8pgg\" node=\"ip-172-20-48-93.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0914 19:47:27.367488       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pods-8481/pod-submit-status-1-12\" node=\"ip-172-20-48-93.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0914 19:47:27.451080       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubectl-1595/httpd\" node=\"ip-172-20-48-93.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0914 19:47:28.622074       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-2257/hostexec-ip-172-20-41-171.sa-east-1.compute.internal-pqpvg\" node=\"ip-172-20-41-171.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0914 19:47:37.325194       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pods-8481/pod-submit-status-1-13\" node=\"ip-172-20-48-93.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0914 19:47:40.926056       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-3183/pod-subpath-test-preprovisionedpv-dn42\" node=\"ip-172-20-48-93.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0914 19:47:41.281077       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-2257/pod-subpath-test-preprovisionedpv-8b2k\" node=\"ip-172-20-41-171.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0914 19:47:43.760828       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"container-probe-3741/busybox-ba948a20-9e5c-4b80-90d2-c07b148c4f6b\" node=\"ip-172-20-48-93.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0914 19:47:43.826783       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-2145/hostexec-ip-172-20-50-202.sa-east-1.compute.internal-xg42s\" 
node=\"ip-172-20-50-202.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0914 19:47:45.197138       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"disruption-179/pod-0\" node=\"ip-172-20-48-93.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0914 19:47:45.342177       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"disruption-179/pod-1\" node=\"ip-172-20-41-171.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0914 19:47:45.486553       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"disruption-179/pod-2\" node=\"ip-172-20-41-171.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0914 19:47:46.587003       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-2257/pod-subpath-test-preprovisionedpv-8b2k\" node=\"ip-172-20-41-171.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0914 19:47:47.400235       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pods-8481/pod-submit-status-1-14\" node=\"ip-172-20-48-93.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0914 19:47:51.831701       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-4903/hostexec-ip-172-20-48-74.sa-east-1.compute.internal-l2kwb\" node=\"ip-172-20-48-74.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0914 19:47:52.754020       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"container-probe-6191/busybox-edd806d0-cb7e-4916-aa69-b4212eadd8f7\" node=\"ip-172-20-48-93.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0914 19:47:54.405381       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-7709/exec-volume-test-inlinevolume-p7vn\" node=\"ip-172-20-48-93.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0914 19:47:54.666438       1 scheduler.go:604] \"Successfully bound pod to node\" 
pod=\"downward-api-4185/downwardapi-volume-1ec949aa-6135-493f-aab2-fb072de93016\" node=\"ip-172-20-48-93.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0914 19:47:55.514209       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-2145/pod-subpath-test-preprovisionedpv-txc5\" node=\"ip-172-20-50-202.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0914 19:47:55.734947       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-4833/pod-subpath-test-inlinevolume-m7mg\" node=\"ip-172-20-41-171.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0914 19:47:56.117581       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-4903/pod-f357691b-ed6e-4441-bbc1-b1b0774da6ec\" node=\"ip-172-20-48-74.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0914 19:47:56.209793       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-1814/affinity-nodeport-transition-fqbv9\" node=\"ip-172-20-48-93.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0914 19:47:56.225277       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-1814/affinity-nodeport-transition-gddjh\" node=\"ip-172-20-41-171.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0914 19:47:56.235177       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-1814/affinity-nodeport-transition-bn9ws\" node=\"ip-172-20-48-74.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0914 19:47:56.325395       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-8814/hostexec-ip-172-20-48-74.sa-east-1.compute.internal-54kr7\" node=\"ip-172-20-48-74.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0914 19:47:57.387990       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"container-probe-7355/startup-69a96df7-7bac-4e28-a0f3-ad8ef5463e7e\" 
node=\"ip-172-20-48-93.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0914 19:47:58.724747       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"emptydir-821/pod-f1a621da-5c0c-4f6f-8b25-1ee9ef8ff75b\" node=\"ip-172-20-48-93.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0914 19:47:59.934699       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-1814/execpod-affinity85cln\" node=\"ip-172-20-41-171.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0914 19:48:00.893295       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-4903/pod-67174e3a-9f93-47d6-8b12-1766d4c52d99\" node=\"ip-172-20-48-74.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0914 19:48:02.795884       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"var-expansion-1528/var-expansion-c8ff5745-2ab5-4ac5-b59a-f998ee9e4b2f\" node=\"ip-172-20-48-93.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0914 19:48:03.251404       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-3452/hostexec-ip-172-20-48-93.sa-east-1.compute.internal-qfrp9\" node=\"ip-172-20-48-93.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0914 19:48:04.390120       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"container-runtime-7090/terminate-cmd-rpa43429f24-6a60-4374-9c04-901ff6803a76\" node=\"ip-172-20-48-93.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0914 19:48:07.337888       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-5600/hostexec-ip-172-20-50-202.sa-east-1.compute.internal-lvn84\" node=\"ip-172-20-50-202.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0914 19:48:07.500990       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-3452/pod-e940b878-15eb-4516-bec3-2bf8bdb4d07f\" 
node=\"ip-172-20-48-93.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0914 19:48:07.632822       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"projected-2918/pod-projected-secrets-604adb40-b76a-4e52-b85a-a1a699f72b36\" node=\"ip-172-20-41-171.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0914 19:48:09.436262       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volumemode-1067/hostexec-ip-172-20-48-93.sa-east-1.compute.internal-2vs5d\" node=\"ip-172-20-48-93.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0914 19:48:09.632900       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"statefulset-6037/ss-0\" node=\"ip-172-20-41-171.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0914 19:48:12.212805       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubectl-8494/e2e-test-httpd-pod\" node=\"ip-172-20-41-171.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0914 19:48:14.822552       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"svcaccounts-5775/test-pod-954f79fb-882c-4127-b31f-9019f8b9d9c3\" node=\"ip-172-20-48-93.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0914 19:48:17.835575       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"svcaccounts-5775/test-pod-954f79fb-882c-4127-b31f-9019f8b9d9c3\" node=\"ip-172-20-50-202.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0914 19:48:21.367724       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"container-lifecycle-hook-1275/pod-handle-http-request\" node=\"ip-172-20-50-202.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0914 19:48:22.992899       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"svcaccounts-5775/test-pod-954f79fb-882c-4127-b31f-9019f8b9d9c3\" node=\"ip-172-20-50-202.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0914 19:48:23.727780       1 
scheduler.go:604] \"Successfully bound pod to node\" pod=\"container-runtime-7090/terminate-cmd-rpof2d2ed2bb-7c81-4852-80cb-8408c6ed2e26\" node=\"ip-172-20-50-202.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0914 19:48:25.633109       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-5600/pod-subpath-test-preprovisionedpv-5p7w\" node=\"ip-172-20-50-202.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0914 19:48:25.942825       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"container-lifecycle-hook-1275/pod-with-poststart-http-hook\" node=\"ip-172-20-50-202.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0914 19:48:26.562990       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volumemode-1067/pod-511210c4-ae79-45ad-acc0-24d447a56a37\" node=\"ip-172-20-48-93.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0914 19:48:26.778584       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-1170/hostexec-ip-172-20-48-74.sa-east-1.compute.internal-twtjj\" node=\"ip-172-20-48-74.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0914 19:48:27.698165       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-7327/hostexec-ip-172-20-50-202.sa-east-1.compute.internal-b7cxc\" node=\"ip-172-20-50-202.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0914 19:48:29.060355       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-8945-9774/csi-mockplugin-0\" node=\"ip-172-20-48-74.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0914 19:48:29.283212       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volumemode-1067/hostexec-ip-172-20-48-93.sa-east-1.compute.internal-57dt7\" node=\"ip-172-20-48-93.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0914 19:48:29.349535       1 scheduler.go:604] \"Successfully bound pod to node\" 
pod=\"csi-mock-volumes-8945-9774/csi-mockplugin-attacher-0\" node=\"ip-172-20-48-74.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0914 19:48:30.290508       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"svcaccounts-5775/test-pod-954f79fb-882c-4127-b31f-9019f8b9d9c3\" node=\"ip-172-20-41-171.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0914 19:48:31.744111       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"statefulset-6037/ss-1\" node=\"ip-172-20-48-93.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0914 19:48:31.989207       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"webhook-1880/sample-webhook-deployment-78988fc6cd-w4jj4\" node=\"ip-172-20-50-202.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0914 19:48:32.747141       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"container-runtime-7090/terminate-cmd-rpnab1eb2dd-c19a-4214-8fef-518cff1dd039\" node=\"ip-172-20-41-171.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0914 19:48:35.013597       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubectl-5316/httpd\" node=\"ip-172-20-41-171.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0914 19:48:38.855642       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volumemode-2562/hostexec-ip-172-20-48-93.sa-east-1.compute.internal-pdfrs\" node=\"ip-172-20-48-93.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0914 19:48:39.798398       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"endpointslice-5077/pod1\" node=\"ip-172-20-50-202.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0914 19:48:39.944266       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"endpointslice-5077/pod2\" node=\"ip-172-20-50-202.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0914 19:48:40.709126       1 scheduler.go:604] \"Successfully bound 
pod to node\" pod=\"persistent-local-volumes-test-7504/hostexec-ip-172-20-41-171.sa-east-1.compute.internal-pdb8g\" node=\"ip-172-20-41-171.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0914 19:48:41.680154       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-8945/pvc-volume-tester-sgvf7\" node=\"ip-172-20-48-74.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0914 19:48:41.702939       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"container-runtime-3377/image-pull-test0aa03856-b9ce-42a8-aa35-ab04d4074d53\" node=\"ip-172-20-48-93.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0914 19:48:41.938779       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-1170/pod-subpath-test-preprovisionedpv-gzkh\" node=\"ip-172-20-48-74.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0914 19:48:42.044216       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"security-context-9982/security-context-3b7e7b5d-226b-4d25-b25f-36df39f24686\" node=\"ip-172-20-50-202.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0914 19:48:42.253366       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-7327/pod-subpath-test-preprovisionedpv-zrfw\" node=\"ip-172-20-50-202.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0914 19:48:46.541288       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"projected-7606/pod-projected-configmaps-99a1b619-637b-4e50-b762-3e3afebe8199\" node=\"ip-172-20-48-93.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0914 19:48:47.668454       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-3709/hostexec-ip-172-20-41-171.sa-east-1.compute.internal-kd6gk\" node=\"ip-172-20-41-171.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0914 19:48:48.436256       1 scheduler.go:604] \"Successfully bound pod to node\" 
pod=\"pod-network-test-8990/netserver-0\" node=\"ip-172-20-41-171.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0914 19:48:48.581509       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pod-network-test-8990/netserver-1\" node=\"ip-172-20-48-74.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0914 19:48:48.725474       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pod-network-test-8990/netserver-2\" node=\"ip-172-20-48-93.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0914 19:48:48.872094       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pod-network-test-8990/netserver-3\" node=\"ip-172-20-50-202.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0914 19:48:50.713191       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"crd-webhook-9809/sample-crd-conversion-webhook-deployment-697cdbd8f4-ksktz\" node=\"ip-172-20-50-202.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0914 19:48:53.857921       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"statefulset-6037/ss-2\" node=\"ip-172-20-50-202.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0914 19:48:56.239067       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-3709/pod-subpath-test-preprovisionedpv-htrp\" node=\"ip-172-20-41-171.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0914 19:48:56.274048       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volumemode-2562/pod-d1f74a3f-e062-42ff-a62e-c1429625a359\" node=\"ip-172-20-48-93.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0914 19:48:58.995652       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volumemode-2562/hostexec-ip-172-20-48-93.sa-east-1.compute.internal-rkrps\" node=\"ip-172-20-48-93.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0914 19:49:00.129127       1 scheduler.go:604] \"Successfully 
bound pod to node\" pod=\"cronjob-9117/failed-jobs-history-limit-27194149-jqhr4\" node=\"ip-172-20-50-202.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0914 19:49:01.470834       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-7915/hostexec-ip-172-20-41-171.sa-east-1.compute.internal-gtpjv\" node=\"ip-172-20-41-171.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0914 19:49:04.250649       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"security-context-5579/security-context-6775ffe5-7a6c-4c27-9a4b-5cf33451b0ca\" node=\"ip-172-20-50-202.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0914 19:49:08.598665       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pod-network-test-6103/netserver-0\" node=\"ip-172-20-41-171.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0914 19:49:08.741445       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pod-network-test-6103/netserver-1\" node=\"ip-172-20-48-74.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0914 19:49:08.888289       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pod-network-test-6103/netserver-2\" node=\"ip-172-20-48-93.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0914 19:49:09.051422       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pod-network-test-6103/netserver-3\" node=\"ip-172-20-50-202.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0914 19:49:10.310385       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pod-network-test-8990/test-container-pod\" node=\"ip-172-20-50-202.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0914 19:49:10.429656       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volumemode-5226/pod-b7058a04-c1be-46e2-84c2-98b353ecda7c\" node=\"ip-172-20-41-171.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0914 19:49:10.455624       
1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pod-network-test-8990/host-test-container-pod\" node=\"ip-172-20-50-202.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0914 19:49:11.984867       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"tables-9977/pod-1\" node=\"ip-172-20-48-74.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0914 19:49:12.265525       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-7915/local-injector\" node=\"ip-172-20-41-171.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0914 19:49:13.147798       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volumemode-5226/hostexec-ip-172-20-41-171.sa-east-1.compute.internal-2w54j\" node=\"ip-172-20-41-171.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0914 19:49:13.407601       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"emptydir-5468/pod-f20ec734-b133-40ed-96f8-e159ff6d9af3\" node=\"ip-172-20-48-74.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0914 19:49:13.858179       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volumemode-5423/hostexec-ip-172-20-48-93.sa-east-1.compute.internal-4fnw6\" node=\"ip-172-20-48-93.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0914 19:49:17.450643       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"emptydir-5730/pod-0cc7b881-7155-4f85-aa1d-ee6455033cea\" node=\"ip-172-20-48-74.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0914 19:49:22.854139       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"secrets-6381/pod-secrets-c6d067a3-697e-4fc0-aed8-617055708587\" node=\"ip-172-20-48-74.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0914 19:49:22.900583       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-6214/affinity-clusterip-transition-tjrrn\" node=\"ip-172-20-50-202.sa-east-1.compute.internal\" 
evaluatedNodes=5 feasibleNodes=4\nI0914 19:49:22.911572       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-6214/affinity-clusterip-transition-nfh6d\" node=\"ip-172-20-48-74.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0914 19:49:22.918442       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-6214/affinity-clusterip-transition-whmdd\" node=\"ip-172-20-41-171.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0914 19:49:24.492716       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-7688/hostexec-ip-172-20-41-171.sa-east-1.compute.internal-dbc45\" node=\"ip-172-20-41-171.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0914 19:49:25.537210       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volumemode-5423/pod-beb60afe-e687-45c2-b495-b9774f34cf48\" node=\"ip-172-20-48-93.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0914 19:49:26.474782       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-6214/execpod-affinityn2s7j\" node=\"ip-172-20-48-74.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0914 19:49:27.107704       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-4956/hostexec-ip-172-20-48-93.sa-east-1.compute.internal-6wgl9\" node=\"ip-172-20-48-93.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0914 19:49:28.011573       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-7915/local-client\" node=\"ip-172-20-41-171.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0914 19:49:28.286950       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volumemode-5423/hostexec-ip-172-20-48-93.sa-east-1.compute.internal-b6xbp\" node=\"ip-172-20-48-93.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0914 19:49:28.962656       1 scheduler.go:604] \"Successfully bound pod to node\" 
pod=\"clientset-6798/podb971c8c1-08f5-429b-83b7-30f62b4599d4\" node=\"ip-172-20-50-202.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0914 19:49:30.503327       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pod-network-test-6103/test-container-pod\" node=\"ip-172-20-48-74.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0914 19:49:30.648104       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pod-network-test-6103/host-test-container-pod\" node=\"ip-172-20-50-202.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0914 19:49:34.298449       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-7977/hostexec-ip-172-20-50-202.sa-east-1.compute.internal-ck4bd\" node=\"ip-172-20-50-202.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0914 19:49:34.790927       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"prestop-7233/server\" node=\"ip-172-20-50-202.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0914 19:49:37.371202       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"prestop-7233/tester\" node=\"ip-172-20-48-74.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0914 19:49:40.733781       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-7977/pod-subpath-test-preprovisionedpv-wgmm\" node=\"ip-172-20-50-202.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0914 19:49:41.684214       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-7688/pod-subpath-test-preprovisionedpv-bqhz\" node=\"ip-172-20-41-171.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0914 19:49:42.077041       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-4956/pod-subpath-test-preprovisionedpv-g2xw\" node=\"ip-172-20-48-93.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0914 19:49:45.967490       1 scheduler.go:604] 
\"Successfully bound pod to node\" pod=\"provisioning-8889/pod-subpath-test-inlinevolume-s2qq\" node=\"ip-172-20-41-171.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0914 19:49:46.038181       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-7977/pod-subpath-test-preprovisionedpv-wgmm\" node=\"ip-172-20-50-202.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0914 19:49:52.162199       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-3961/hostexec-ip-172-20-48-93.sa-east-1.compute.internal-7l9w8\" node=\"ip-172-20-48-93.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0914 19:49:55.653492       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-7177-5053/csi-mockplugin-0\" node=\"ip-172-20-41-171.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0914 19:49:55.941196       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-7177-5053/csi-mockplugin-attacher-0\" node=\"ip-172-20-41-171.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0914 19:49:57.296446       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"ephemeral-5639-9910/csi-hostpath-attacher-0\" node=\"ip-172-20-50-202.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0914 19:49:57.758713       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"ephemeral-5639-9910/csi-hostpathplugin-0\" node=\"ip-172-20-50-202.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0914 19:49:57.882374       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"ephemeral-5639-9910/csi-hostpath-provisioner-0\" node=\"ip-172-20-50-202.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0914 19:49:58.026056       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"ephemeral-5639-9910/csi-hostpath-resizer-0\" node=\"ip-172-20-50-202.sa-east-1.compute.internal\" 
evaluatedNodes=5 feasibleNodes=1\nI0914 19:49:58.173270       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"ephemeral-5639-9910/csi-hostpath-snapshotter-0\" node=\"ip-172-20-50-202.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0914 19:49:58.239230       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"downward-api-1457/downward-api-76bb0a22-aea2-470b-a2cb-8712178a0cd2\" node=\"ip-172-20-48-93.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0914 19:49:58.451925       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"ephemeral-5639/inline-volume-tester-vp6d4\" node=\"ip-172-20-50-202.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0914 19:50:00.133008       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"cronjob-7996/concurrent-27194150-ll2kv\" node=\"ip-172-20-48-93.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0914 19:50:00.147322       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"cronjob-9117/failed-jobs-history-limit-27194150-clhfx\" node=\"ip-172-20-48-93.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0914 19:50:02.451545       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-1344/hostexec-ip-172-20-41-171.sa-east-1.compute.internal-5v4c8\" node=\"ip-172-20-41-171.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0914 19:50:03.030293       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"ephemeral-5639/inline-volume-tester2-cqxv8\" node=\"ip-172-20-50-202.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0914 19:50:07.217411       1 factory.go:339] \"Unable to schedule pod; no fit; waiting\" pod=\"csi-mock-volumes-7177/pvc-volume-tester-g7gfv\" err=\"0/5 nodes are available: 1 node(s) did not have enough free storage, 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 3 node(s) didn't match Pod's node 
affinity/selector.\"\nI0914 19:50:07.932339       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-7382/webserver-847dcfb7fb-zp4jp\" node=\"ip-172-20-48-93.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0914 19:50:07.950614       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-7382/webserver-847dcfb7fb-8pszb\" node=\"ip-172-20-48-74.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0914 19:50:07.951529       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-7382/webserver-847dcfb7fb-q7mwz\" node=\"ip-172-20-50-202.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0914 19:50:07.967095       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-7382/webserver-847dcfb7fb-xc9kr\" node=\"ip-172-20-41-171.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0914 19:50:07.970410       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-7382/webserver-847dcfb7fb-8vrbf\" node=\"ip-172-20-48-93.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0914 19:50:07.974919       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-7382/webserver-847dcfb7fb-x2m9z\" node=\"ip-172-20-48-74.sa-east-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\n==== END logs for container kube-scheduler of pod kube-system/kube-scheduler-ip-172-20-38-237.sa-east-1.compute.internal ====\n{\n    \"kind\": \"EventList\",\n    \"apiVersion\": \"v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"19169\"\n    },\n    \"items\": []\n}\n{\n    \"kind\": \"ReplicationControllerList\",\n    \"apiVersion\": \"v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"42092\"\n    },\n    \"items\": []\n}\n{\n    \"kind\": \"ServiceList\",\n    \"apiVersion\": \"v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"42103\"\n    },\n    \"items\": []\n}\n{\n    \"kind\": \"DaemonSetList\",\n    
\"apiVersion\": \"apps/v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"42103\"\n    },\n    \"items\": []\n}\n{\n    \"kind\": \"DeploymentList\",\n    \"apiVersion\": \"apps/v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"42104\"\n    },\n    \"items\": []\n}\n{\n    \"kind\": \"ReplicaSetList\",\n    \"apiVersion\": \"apps/v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"42110\"\n    },\n    \"items\": []\n}\n{\n    \"kind\": \"PodList\",\n    \"apiVersion\": \"v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"42112\"\n    },\n    \"items\": []\n}\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 14 19:50:09.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4647" for this suite.


... skipping 21 lines ...
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-3626
STEP: Creating statefulset with conflicting port in namespace statefulset-3626
STEP: Waiting until pod test-pod will start running in namespace statefulset-3626
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-3626
Sep 14 19:49:53.753: INFO: Observed stateful pod in namespace: statefulset-3626, name: ss-0, uid: 162783bc-090c-47f6-9994-3e0d1a98edb7, status phase: Pending. Waiting for statefulset controller to delete.
Sep 14 19:49:53.897: INFO: Observed stateful pod in namespace: statefulset-3626, name: ss-0, uid: 162783bc-090c-47f6-9994-3e0d1a98edb7, status phase: Failed. Waiting for statefulset controller to delete.
Sep 14 19:49:53.897: INFO: Observed stateful pod in namespace: statefulset-3626, name: ss-0, uid: 162783bc-090c-47f6-9994-3e0d1a98edb7, status phase: Failed. Waiting for statefulset controller to delete.
Sep 14 19:49:53.897: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-3626
STEP: Removing pod with conflicting port in namespace statefulset-3626
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-3626 and will be in running state
[AfterEach] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116
Sep 14 19:49:58.479: INFO: Deleting all statefulset in ns statefulset-3626
... skipping 22 lines ...
Sep 14 19:49:42.281: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support file as subpath [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
Sep 14 19:49:43.000: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Sep 14 19:49:43.289: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-8889" in namespace "provisioning-8889" to be "Succeeded or Failed"
Sep 14 19:49:43.432: INFO: Pod "hostpath-symlink-prep-provisioning-8889": Phase="Pending", Reason="", readiness=false. Elapsed: 143.392666ms
Sep 14 19:49:45.583: INFO: Pod "hostpath-symlink-prep-provisioning-8889": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.29416843s
STEP: Saw pod success
Sep 14 19:49:45.583: INFO: Pod "hostpath-symlink-prep-provisioning-8889" satisfied condition "Succeeded or Failed"
Sep 14 19:49:45.583: INFO: Deleting pod "hostpath-symlink-prep-provisioning-8889" in namespace "provisioning-8889"
Sep 14 19:49:45.730: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-8889" to be fully deleted
Sep 14 19:49:45.873: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-s2qq
STEP: Creating a pod to test atomic-volume-subpath
Sep 14 19:49:46.019: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-s2qq" in namespace "provisioning-8889" to be "Succeeded or Failed"
Sep 14 19:49:46.162: INFO: Pod "pod-subpath-test-inlinevolume-s2qq": Phase="Pending", Reason="", readiness=false. Elapsed: 143.294489ms
Sep 14 19:49:48.305: INFO: Pod "pod-subpath-test-inlinevolume-s2qq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.28685397s
Sep 14 19:49:50.451: INFO: Pod "pod-subpath-test-inlinevolume-s2qq": Phase="Running", Reason="", readiness=true. Elapsed: 4.431965409s
Sep 14 19:49:52.594: INFO: Pod "pod-subpath-test-inlinevolume-s2qq": Phase="Running", Reason="", readiness=true. Elapsed: 6.575742862s
Sep 14 19:49:54.740: INFO: Pod "pod-subpath-test-inlinevolume-s2qq": Phase="Running", Reason="", readiness=true. Elapsed: 8.721204594s
Sep 14 19:49:56.885: INFO: Pod "pod-subpath-test-inlinevolume-s2qq": Phase="Running", Reason="", readiness=true. Elapsed: 10.866415111s
Sep 14 19:49:59.029: INFO: Pod "pod-subpath-test-inlinevolume-s2qq": Phase="Running", Reason="", readiness=true. Elapsed: 13.010906302s
Sep 14 19:50:01.179: INFO: Pod "pod-subpath-test-inlinevolume-s2qq": Phase="Running", Reason="", readiness=true. Elapsed: 15.160799987s
Sep 14 19:50:03.324: INFO: Pod "pod-subpath-test-inlinevolume-s2qq": Phase="Running", Reason="", readiness=true. Elapsed: 17.305790882s
Sep 14 19:50:05.469: INFO: Pod "pod-subpath-test-inlinevolume-s2qq": Phase="Running", Reason="", readiness=true. Elapsed: 19.450261086s
Sep 14 19:50:07.613: INFO: Pod "pod-subpath-test-inlinevolume-s2qq": Phase="Running", Reason="", readiness=true. Elapsed: 21.594244516s
Sep 14 19:50:09.758: INFO: Pod "pod-subpath-test-inlinevolume-s2qq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 23.739193136s
STEP: Saw pod success
Sep 14 19:50:09.758: INFO: Pod "pod-subpath-test-inlinevolume-s2qq" satisfied condition "Succeeded or Failed"
Sep 14 19:50:09.902: INFO: Trying to get logs from node ip-172-20-41-171.sa-east-1.compute.internal pod pod-subpath-test-inlinevolume-s2qq container test-container-subpath-inlinevolume-s2qq: <nil>
STEP: delete the pod
Sep 14 19:50:10.205: INFO: Waiting for pod pod-subpath-test-inlinevolume-s2qq to disappear
Sep 14 19:50:10.349: INFO: Pod pod-subpath-test-inlinevolume-s2qq no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-s2qq
Sep 14 19:50:10.349: INFO: Deleting pod "pod-subpath-test-inlinevolume-s2qq" in namespace "provisioning-8889"
STEP: Deleting pod
Sep 14 19:50:10.494: INFO: Deleting pod "pod-subpath-test-inlinevolume-s2qq" in namespace "provisioning-8889"
Sep 14 19:50:10.853: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-8889" in namespace "provisioning-8889" to be "Succeeded or Failed"
Sep 14 19:50:10.996: INFO: Pod "hostpath-symlink-prep-provisioning-8889": Phase="Pending", Reason="", readiness=false. Elapsed: 142.955456ms
Sep 14 19:50:13.140: INFO: Pod "hostpath-symlink-prep-provisioning-8889": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.287279616s
STEP: Saw pod success
Sep 14 19:50:13.140: INFO: Pod "hostpath-symlink-prep-provisioning-8889" satisfied condition "Succeeded or Failed"
Sep 14 19:50:13.140: INFO: Deleting pod "hostpath-symlink-prep-provisioning-8889" in namespace "provisioning-8889"
Sep 14 19:50:13.296: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-8889" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 14 19:50:13.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-8889" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":40,"skipped":286,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}

SSS
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info dump should check if cluster-info dump succeeds","total":-1,"completed":31,"skipped":249,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-network] DNS should resolve DNS of partial qualified names for the cluster [LinuxOnly]","[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]"]}
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 14 19:50:09.774: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-553cafc7-efb8-49ad-b2b6-1302535d21dc
STEP: Creating a pod to test consume secrets
Sep 14 19:50:10.826: INFO: Waiting up to 5m0s for pod "pod-secrets-7b3c07f3-8a83-4c87-aa29-69a9cf679827" in namespace "secrets-5752" to be "Succeeded or Failed"
Sep 14 19:50:10.972: INFO: Pod "pod-secrets-7b3c07f3-8a83-4c87-aa29-69a9cf679827": Phase="Pending", Reason="", readiness=false. Elapsed: 145.087786ms
Sep 14 19:50:13.117: INFO: Pod "pod-secrets-7b3c07f3-8a83-4c87-aa29-69a9cf679827": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.289926515s
STEP: Saw pod success
Sep 14 19:50:13.117: INFO: Pod "pod-secrets-7b3c07f3-8a83-4c87-aa29-69a9cf679827" satisfied condition "Succeeded or Failed"
Sep 14 19:50:13.260: INFO: Trying to get logs from node ip-172-20-48-93.sa-east-1.compute.internal pod pod-secrets-7b3c07f3-8a83-4c87-aa29-69a9cf679827 container secret-volume-test: <nil>
STEP: delete the pod
Sep 14 19:50:13.554: INFO: Waiting for pod pod-secrets-7b3c07f3-8a83-4c87-aa29-69a9cf679827 to disappear
Sep 14 19:50:13.698: INFO: Pod pod-secrets-7b3c07f3-8a83-4c87-aa29-69a9cf679827 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 14 19:50:13.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5752" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":32,"skipped":249,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-network] DNS should resolve DNS of partial qualified names for the cluster [LinuxOnly]","[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:50:14.003: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
... skipping 60 lines ...
      Only supported for node OS distro [gci ubuntu custom] (not debian)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:263
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":47,"skipped":329,"failed":1,"failures":["[sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]"]}
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 14 19:49:49.161: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 11 lines ...
• [SLOW TEST:25.501 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":-1,"completed":48,"skipped":329,"failed":1,"failures":["[sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]"]}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:50:14.683: INFO: Only supported for providers [gce gke] (not aws)
... skipping 34 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 14 19:50:14.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4875" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl create quota should reject quota with invalid scopes","total":-1,"completed":41,"skipped":289,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}

SS
------------------------------
[BeforeEach] [sig-api-machinery] client-go should negotiate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 4 lines ...
[AfterEach] [sig-api-machinery] client-go should negotiate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 14 19:50:15.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

•
------------------------------
{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/vnd.kubernetes.protobuf,application/json\"","total":-1,"completed":42,"skipped":291,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:50:15.299: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 73 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-7164eb24-e2f8-4a4c-bc61-bb5ebb9057fe
STEP: Creating a pod to test consume secrets
Sep 14 19:50:15.067: INFO: Waiting up to 5m0s for pod "pod-secrets-67046edf-1bec-4173-8425-67c19b527504" in namespace "secrets-8315" to be "Succeeded or Failed"
Sep 14 19:50:15.211: INFO: Pod "pod-secrets-67046edf-1bec-4173-8425-67c19b527504": Phase="Pending", Reason="", readiness=false. Elapsed: 143.744558ms
Sep 14 19:50:17.358: INFO: Pod "pod-secrets-67046edf-1bec-4173-8425-67c19b527504": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.291119068s
STEP: Saw pod success
Sep 14 19:50:17.358: INFO: Pod "pod-secrets-67046edf-1bec-4173-8425-67c19b527504" satisfied condition "Succeeded or Failed"
Sep 14 19:50:17.502: INFO: Trying to get logs from node ip-172-20-48-93.sa-east-1.compute.internal pod pod-secrets-67046edf-1bec-4173-8425-67c19b527504 container secret-volume-test: <nil>
STEP: delete the pod
Sep 14 19:50:17.809: INFO: Waiting for pod pod-secrets-67046edf-1bec-4173-8425-67c19b527504 to disappear
Sep 14 19:50:17.958: INFO: Pod pod-secrets-67046edf-1bec-4173-8425-67c19b527504 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 14 19:50:17.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8315" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":33,"skipped":260,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-network] DNS should resolve DNS of partial qualified names for the cluster [LinuxOnly]","[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]"]}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:50:18.258: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 38 lines ...
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 14 19:50:18.283: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail when exceeds active deadline
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:253
STEP: Creating a job
STEP: Ensuring job past active deadline
[AfterEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 14 19:50:21.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-9050" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] Job should fail when exceeds active deadline","total":-1,"completed":34,"skipped":263,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-network] DNS should resolve DNS of partial qualified names for the cluster [LinuxOnly]","[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]"]}
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 14 19:50:21.594: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 24 lines ...
• [SLOW TEST:9.689 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should release NodePorts on delete
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1561
------------------------------
{"msg":"PASSED [sig-network] Services should release NodePorts on delete","total":-1,"completed":35,"skipped":263,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-network] DNS should resolve DNS of partial qualified names for the cluster [LinuxOnly]","[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]"]}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:50:31.322: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 119 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSIStorageCapacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1134
    CSIStorageCapacity used, insufficient capacity
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1177
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity","total":-1,"completed":16,"skipped":108,"failed":3,"failures":["[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns should create 3 PVs and 3 PVCs: test write access"]}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:50:31.986: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 81 lines ...
Sep 14 19:48:04.889: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Sep 14 19:48:04.889: INFO: Running '/tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1814 exec execpod-affinity85cln -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.70.170.6 80'
Sep 14 19:48:06.388: INFO: stderr: "+ nc -v -t -w 2 100.70.170.6 80\n+ echo hostName\nConnection to 100.70.170.6 80 port [tcp/http] succeeded!\n"
Sep 14 19:48:06.388: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Sep 14 19:48:06.388: INFO: Running '/tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1814 exec execpod-affinity85cln -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.48.74 30261'
Sep 14 19:48:09.889: INFO: rc: 1
Sep 14 19:48:09.889: INFO: Service reachability failing with error: error running /tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1814 exec execpod-affinity85cln -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.48.74 30261:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 172.20.48.74 30261
nc: connect to 172.20.48.74 port 30261 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep 14 19:48:10.889: INFO: Running '/tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1814 exec execpod-affinity85cln -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.48.74 30261'
Sep 14 19:48:14.359: INFO: rc: 1
Sep 14 19:48:14.359: INFO: Service reachability failing with error: error running /tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1814 exec execpod-affinity85cln -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.48.74 30261:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 172.20.48.74 30261
nc: connect to 172.20.48.74 port 30261 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep 14 19:48:14.890: INFO: Running '/tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1814 exec execpod-affinity85cln -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.48.74 30261'
Sep 14 19:48:18.451: INFO: rc: 1
Sep 14 19:48:18.451: INFO: Service reachability failing with error: error running /tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1814 exec execpod-affinity85cln -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.48.74 30261:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 172.20.48.74 30261
nc: connect to 172.20.48.74 port 30261 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep 14 19:48:18.890: INFO: Running '/tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1814 exec execpod-affinity85cln -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.48.74 30261'
Sep 14 19:48:22.430: INFO: rc: 1
Sep 14 19:48:22.430: INFO: Service reachability failing with error: error running /tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1814 exec execpod-affinity85cln -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.48.74 30261:
Command stdout:

stderr:
+ nc -v -t -w 2 172.20.48.74 30261
+ echo hostName
nc: connect to 172.20.48.74 port 30261 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep 14 19:48:22.890: INFO: Running '/tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1814 exec execpod-affinity85cln -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.48.74 30261'
Sep 14 19:48:26.349: INFO: rc: 1
Sep 14 19:48:26.349: INFO: Service reachability failing with error: error running /tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1814 exec execpod-affinity85cln -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.48.74 30261:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 172.20.48.74 30261
nc: connect to 172.20.48.74 port 30261 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep 14 19:48:26.890: INFO: Running '/tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1814 exec execpod-affinity85cln -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.48.74 30261'
Sep 14 19:48:30.366: INFO: rc: 1
Sep 14 19:48:30.366: INFO: Service reachability failing with error: error running /tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1814 exec execpod-affinity85cln -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.48.74 30261:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 172.20.48.74 30261
nc: connect to 172.20.48.74 port 30261 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep 14 19:48:30.889: INFO: Running '/tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1814 exec execpod-affinity85cln -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.48.74 30261'
Sep 14 19:48:34.781: INFO: rc: 1
Sep 14 19:48:34.781: INFO: Service reachability failing with error: error running /tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1814 exec execpod-affinity85cln -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.48.74 30261:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 172.20.48.74 30261
nc: connect to 172.20.48.74 port 30261 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep 14 19:48:34.890: INFO: Running '/tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1814 exec execpod-affinity85cln -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.48.74 30261'
Sep 14 19:48:38.412: INFO: rc: 1
Sep 14 19:48:38.412: INFO: Service reachability failing with error: error running /tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1814 exec execpod-affinity85cln -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.48.74 30261:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 172.20.48.74 30261
nc: connect to 172.20.48.74 port 30261 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep 14 19:48:38.890: INFO: Running '/tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1814 exec execpod-affinity85cln -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.48.74 30261'
Sep 14 19:48:42.415: INFO: rc: 1
Sep 14 19:48:42.415: INFO: Service reachability failing with error: error running /tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1814 exec execpod-affinity85cln -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.48.74 30261:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 172.20.48.74 30261
nc: connect to 172.20.48.74 port 30261 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep 14 19:48:42.889: INFO: Running '/tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1814 exec execpod-affinity85cln -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.48.74 30261'
Sep 14 19:48:46.364: INFO: rc: 1
Sep 14 19:48:46.364: INFO: Service reachability failing with error: error running /tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1814 exec execpod-affinity85cln -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.48.74 30261:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 172.20.48.74 30261
nc: connect to 172.20.48.74 port 30261 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep 14 19:48:46.889: INFO: Running '/tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1814 exec execpod-affinity85cln -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.48.74 30261'
Sep 14 19:48:50.352: INFO: rc: 1
Sep 14 19:48:50.352: INFO: Service reachability failing with error: error running /tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1814 exec execpod-affinity85cln -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.48.74 30261:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 172.20.48.74 30261
nc: connect to 172.20.48.74 port 30261 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep 14 19:48:50.890: INFO: Running '/tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1814 exec execpod-affinity85cln -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.48.74 30261'
Sep 14 19:48:54.365: INFO: rc: 1
Sep 14 19:48:54.365: INFO: Service reachability failing with error: error running /tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1814 exec execpod-affinity85cln -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.48.74 30261:
Command stdout:

stderr:
+ + ncecho -v hostName -t
 -w 2 172.20.48.74 30261
nc: connect to 172.20.48.74 port 30261 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep 14 19:48:54.889: INFO: Running '/tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1814 exec execpod-affinity85cln -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.48.74 30261'
Sep 14 19:48:58.645: INFO: rc: 1
Sep 14 19:48:58.645: INFO: Service reachability failing with error: error running /tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1814 exec execpod-affinity85cln -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.48.74 30261:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 172.20.48.74 30261
nc: connect to 172.20.48.74 port 30261 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep 14 19:48:58.889: INFO: Running '/tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1814 exec execpod-affinity85cln -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.48.74 30261'
Sep 14 19:49:02.377: INFO: rc: 1
Sep 14 19:49:02.377: INFO: Service reachability failing with error: error running /tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1814 exec execpod-affinity85cln -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.48.74 30261:
Command stdout:

stderr:
+ + echonc -v hostName -t
 -w 2 172.20.48.74 30261
nc: connect to 172.20.48.74 port 30261 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep 14 19:49:02.890: INFO: Running '/tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1814 exec execpod-affinity85cln -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.48.74 30261'
Sep 14 19:49:06.393: INFO: rc: 1
Sep 14 19:49:06.393: INFO: Service reachability failing with error: error running /tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1814 exec execpod-affinity85cln -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.48.74 30261:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 172.20.48.74 30261
nc: connect to 172.20.48.74 port 30261 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep 14 19:49:06.889: INFO: Running '/tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1814 exec execpod-affinity85cln -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.48.74 30261'
Sep 14 19:49:10.546: INFO: rc: 1
Sep 14 19:49:10.546: INFO: Service reachability failing with error: error running /tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1814 exec execpod-affinity85cln -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.48.74 30261:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 172.20.48.74 30261
nc: connect to 172.20.48.74 port 30261 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep 14 19:49:10.890: INFO: Running '/tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1814 exec execpod-affinity85cln -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.48.74 30261'
Sep 14 19:49:14.382: INFO: rc: 1
Sep 14 19:49:14.382: INFO: Service reachability failing with error: error running /tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1814 exec execpod-affinity85cln -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.48.74 30261:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 172.20.48.74 30261
nc: connect to 172.20.48.74 port 30261 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep 14 19:49:14.890: INFO: Running '/tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1814 exec execpod-affinity85cln -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.48.74 30261'
Sep 14 19:49:18.346: INFO: rc: 1
Sep 14 19:49:18.346: INFO: Service reachability failing with error: error running /tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1814 exec execpod-affinity85cln -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.48.74 30261:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 172.20.48.74 30261
nc: connect to 172.20.48.74 port 30261 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep 14 19:49:18.890: INFO: Running '/tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1814 exec execpod-affinity85cln -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.48.74 30261'
Sep 14 19:49:22.345: INFO: rc: 1
Sep 14 19:49:22.345: INFO: Service reachability failing with error: error running /tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1814 exec execpod-affinity85cln -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.48.74 30261:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 172.20.48.74 30261
nc: connect to 172.20.48.74 port 30261 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep 14 19:49:22.890: INFO: Running '/tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1814 exec execpod-affinity85cln -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.48.74 30261'
Sep 14 19:49:26.433: INFO: rc: 1
Sep 14 19:49:26.434: INFO: Service reachability failing with error: error running /tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1814 exec execpod-affinity85cln -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.48.74 30261:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 172.20.48.74 30261
nc: connect to 172.20.48.74 port 30261 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep 14 19:49:26.889: INFO: Running '/tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1814 exec execpod-affinity85cln -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.48.74 30261'
Sep 14 19:49:30.449: INFO: rc: 1
Sep 14 19:49:30.449: INFO: Service reachability failing with error: error running /tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1814 exec execpod-affinity85cln -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.48.74 30261:
Command stdout:

stderr:
+ + nc -v -t -w 2 172.20.48.74 30261
echo hostName
nc: connect to 172.20.48.74 port 30261 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep 14 19:49:30.890: INFO: Running '/tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1814 exec execpod-affinity85cln -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.48.74 30261'
Sep 14 19:49:34.346: INFO: rc: 1
Sep 14 19:49:34.346: INFO: Service reachability failing with error: error running /tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1814 exec execpod-affinity85cln -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.48.74 30261:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 172.20.48.74 30261
nc: connect to 172.20.48.74 port 30261 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep 14 19:49:34.890: INFO: Running '/tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1814 exec execpod-affinity85cln -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.48.74 30261'
Sep 14 19:49:38.361: INFO: rc: 1
Sep 14 19:49:38.361: INFO: Service reachability failing with error: error running /tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1814 exec execpod-affinity85cln -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.48.74 30261:
Command stdout:

stderr:
+ nc -v -t -w 2 172.20.48.74 30261
+ echo hostName
nc: connect to 172.20.48.74 port 30261 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep 14 19:49:38.890: INFO: Running '/tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1814 exec execpod-affinity85cln -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.48.74 30261'
Sep 14 19:49:42.373: INFO: rc: 1
Sep 14 19:49:42.373: INFO: Service reachability failing with error: error running /tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1814 exec execpod-affinity85cln -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.48.74 30261:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 172.20.48.74 30261
nc: connect to 172.20.48.74 port 30261 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep 14 19:49:42.889: INFO: Running '/tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1814 exec execpod-affinity85cln -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.48.74 30261'
Sep 14 19:49:46.384: INFO: rc: 1
Sep 14 19:49:46.384: INFO: Service reachability failing with error: error running /tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1814 exec execpod-affinity85cln -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.48.74 30261:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 172.20.48.74 30261
nc: connect to 172.20.48.74 port 30261 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep 14 19:49:46.890: INFO: Running '/tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1814 exec execpod-affinity85cln -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.48.74 30261'
Sep 14 19:49:50.367: INFO: rc: 1
Sep 14 19:49:50.367: INFO: Service reachability failing with error: error running /tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1814 exec execpod-affinity85cln -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.48.74 30261:
Command stdout:

stderr:
+ nc -v -t -w 2 172.20.48.74 30261
+ echo hostName
nc: connect to 172.20.48.74 port 30261 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep 14 19:49:50.889: INFO: Running '/tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1814 exec execpod-affinity85cln -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.48.74 30261'
Sep 14 19:49:54.358: INFO: rc: 1
Sep 14 19:49:54.358: INFO: Service reachability failing with error: error running /tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1814 exec execpod-affinity85cln -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.48.74 30261:
Command stdout:

stderr:
+ + ncecho -v -t hostName -w
 2 172.20.48.74 30261
nc: connect to 172.20.48.74 port 30261 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep 14 19:49:54.890: INFO: Running '/tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1814 exec execpod-affinity85cln -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.48.74 30261'
Sep 14 19:49:58.560: INFO: rc: 1
Sep 14 19:49:58.560: INFO: Service reachability failing with error: error running /tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1814 exec execpod-affinity85cln -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.48.74 30261:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 172.20.48.74 30261
nc: connect to 172.20.48.74 port 30261 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep 14 19:49:58.890: INFO: Running '/tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1814 exec execpod-affinity85cln -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.48.74 30261'
Sep 14 19:50:02.355: INFO: rc: 1
Sep 14 19:50:02.355: INFO: Service reachability failing with error: error running /tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1814 exec execpod-affinity85cln -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.48.74 30261:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 172.20.48.74 30261
nc: connect to 172.20.48.74 port 30261 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep 14 19:50:02.890: INFO: Running '/tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1814 exec execpod-affinity85cln -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.48.74 30261'
Sep 14 19:50:06.404: INFO: rc: 1
Sep 14 19:50:06.405: INFO: Service reachability failing with error: error running /tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1814 exec execpod-affinity85cln -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.48.74 30261:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 172.20.48.74 30261
nc: connect to 172.20.48.74 port 30261 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep 14 19:50:06.889: INFO: Running '/tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1814 exec execpod-affinity85cln -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.48.74 30261'
Sep 14 19:50:10.583: INFO: rc: 1
Sep 14 19:50:10.583: INFO: Service reachability failing with error: error running /tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1814 exec execpod-affinity85cln -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.48.74 30261:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 172.20.48.74 30261
nc: connect to 172.20.48.74 port 30261 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep 14 19:50:10.583: INFO: Running '/tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1814 exec execpod-affinity85cln -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.48.74 30261'
Sep 14 19:50:14.128: INFO: rc: 1
Sep 14 19:50:14.128: INFO: Service reachability failing with error: error running /tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1814 exec execpod-affinity85cln -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.48.74 30261:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 172.20.48.74 30261
nc: connect to 172.20.48.74 port 30261 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep 14 19:50:14.128: FAIL: Unexpected error:
    <*errors.errorString | 0xc004e962b0>: {
        s: "service is not reachable within 2m0s timeout on endpoint 172.20.48.74:30261 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint 172.20.48.74:30261 over TCP protocol
occurred
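The probe above is driven by a retry-until-deadline loop: rerun the `nc` check every few seconds until it succeeds or the 2m0s timeout expires. A minimal shell sketch of that pattern — `retry_until` and the short 5s/2s deadlines are illustrative assumptions, not the e2e framework's actual code:

```shell
# Hypothetical helper mirroring the Retrying.../timeout pattern in the log above.
# retry_until <seconds> <cmd...>: rerun cmd until it succeeds or the deadline passes.
retry_until() {
  local deadline=$(( $(date +%s) + $1 ))
  shift
  until "$@"; do
    if (( $(date +%s) >= deadline )); then
      echo "not reachable within timeout" >&2
      return 1
    fi
    echo "Retrying..."
    sleep 1
  done
}

retry_until 5 true && echo "reachable"    # succeeds on the first attempt
rc=0; retry_until 2 false || rc=$?        # exhausts the deadline, as in the run above
```

In the real test, the command in place of `true`/`false` is the `kubectl exec ... nc -v -t -w 2 <node-ip> <node-port>` invocation shown in the log.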

... skipping 291 lines ...
• Failure [159.012 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Sep 14 19:50:14.128: Unexpected error:
      <*errors.errorString | 0xc004e962b0>: {
          s: "service is not reachable within 2m0s timeout on endpoint 172.20.48.74:30261 over TCP protocol",
      }
      service is not reachable within 2m0s timeout on endpoint 172.20.48.74:30261 over TCP protocol
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2572
------------------------------
{"msg":"FAILED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":4,"skipped":58,"failed":2,"failures":["[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]}

SS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 10 lines ...
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Sep 14 19:48:54.480: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert a non homogeneous list of CRs [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Sep 14 19:48:54.625: INFO: >>> kubeConfig: /root/.kube/config
Sep 14 19:49:27.256: INFO: error waiting for conversion to succeed during setup: conversion webhook for stable.example.com/v2, Kind=E2e-test-crd-webhook-945-crd failed: Post "https://e2e-test-crd-conversion-webhook.crd-webhook-9809.svc:9443/crdconvert?timeout=30s": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Sep 14 19:49:57.503: INFO: error waiting for conversion to succeed during setup: conversion webhook for stable.example.com/v2, Kind=E2e-test-crd-webhook-945-crd failed: Post "https://e2e-test-crd-conversion-webhook.crd-webhook-9809.svc:9443/crdconvert?timeout=30s": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Sep 14 19:50:27.654: INFO: error waiting for conversion to succeed during setup: conversion webhook for stable.example.com/v2, Kind=E2e-test-crd-webhook-945-crd failed: Post "https://e2e-test-crd-conversion-webhook.crd-webhook-9809.svc:9443/crdconvert?timeout=30s": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Sep 14 19:50:27.654: FAIL: Unexpected error:
    <*errors.errorString | 0xc000244250>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 270 lines ...
• Failure [107.244 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert a non homogeneous list of CRs [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Sep 14 19:50:27.654: Unexpected error:
      <*errors.errorString | 0xc000244250>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:499
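Each conversion attempt above is cancelled at the client's 30s HTTP deadline (`Client.Timeout exceeded` / `context deadline exceeded`). The same fail-at-deadline behavior can be sketched with coreutils `timeout`; the 2s/5s values are arbitrary stand-ins for the 30s webhook deadline:

```shell
# `timeout` kills the command once the deadline fires and exits 124,
# analogous to the webhook POST being cancelled at Client.Timeout.
timeout 2 sleep 5
echo "exit=$?"   # prints exit=124: the deadline fired before sleep finished
```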
------------------------------
{"msg":"FAILED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":-1,"completed":23,"skipped":249,"failed":4,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]"]}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:50:36.342: INFO: Only supported for providers [vsphere] (not aws)
... skipping 40 lines ...
• [SLOW TEST:5.875 seconds]
[sig-apps] ReplicationController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":-1,"completed":36,"skipped":269,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-network] DNS should resolve DNS of partial qualified names for the cluster [LinuxOnly]","[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]"]}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:50:37.218: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 44 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap configmap-9446/configmap-test-4e58754f-1068-4b77-b4f2-bfec1d7aaa89
STEP: Creating a pod to test consume configMaps
Sep 14 19:50:33.072: INFO: Waiting up to 5m0s for pod "pod-configmaps-1885cd81-b403-453a-b42c-58b391b04de7" in namespace "configmap-9446" to be "Succeeded or Failed"
Sep 14 19:50:33.215: INFO: Pod "pod-configmaps-1885cd81-b403-453a-b42c-58b391b04de7": Phase="Pending", Reason="", readiness=false. Elapsed: 143.270544ms
Sep 14 19:50:35.359: INFO: Pod "pod-configmaps-1885cd81-b403-453a-b42c-58b391b04de7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.2875813s
Sep 14 19:50:37.504: INFO: Pod "pod-configmaps-1885cd81-b403-453a-b42c-58b391b04de7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.431847037s
STEP: Saw pod success
Sep 14 19:50:37.504: INFO: Pod "pod-configmaps-1885cd81-b403-453a-b42c-58b391b04de7" satisfied condition "Succeeded or Failed"
Sep 14 19:50:37.647: INFO: Trying to get logs from node ip-172-20-50-202.sa-east-1.compute.internal pod pod-configmaps-1885cd81-b403-453a-b42c-58b391b04de7 container env-test: <nil>
STEP: delete the pod
Sep 14 19:50:37.945: INFO: Waiting for pod pod-configmaps-1885cd81-b403-453a-b42c-58b391b04de7 to disappear
Sep 14 19:50:38.094: INFO: Pod pod-configmaps-1885cd81-b403-453a-b42c-58b391b04de7 no longer exists
[AfterEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.322 seconds]
[sig-node] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":121,"failed":3,"failures":["[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns should create 3 PVs and 3 PVCs: test write access"]}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:50:38.420: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 26 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 17 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194

      Driver supports dynamic provisioning, skipping InlineVolume pattern

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:233
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":48,"skipped":422,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]"]}
[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 14 19:49:20.670: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename conntrack
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 12 lines ...
STEP: Client pod created
STEP: checking client pod does not RST the TCP connection because it receives an INVALID packet

Sep 14 19:50:31.425: INFO: boom-server pod logs: 2021/09/14 19:49:25 external ip: 100.96.1.16
2021/09/14 19:49:25 listen on 0.0.0.0:9000
2021/09/14 19:49:25 probing 100.96.1.16

Sep 14 19:50:31.425: FAIL: Boom server pod did not send any bad packet to the client

Full Stack Trace
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc002ab6780)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc002ab6780)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
... skipping 268 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:282

  Sep 14 19:50:31.425: Boom server pod did not send any bad packet to the client

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113
------------------------------
{"msg":"FAILED [sig-network] Conntrack should drop INVALID conntrack entries","total":-1,"completed":48,"skipped":422,"failed":5,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-network] Conntrack should drop INVALID conntrack entries"]}

SSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:50:38.700: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 14 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SSSS
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":-1,"completed":40,"skipped":311,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]"]}
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 14 19:50:10.084: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 86 lines ...
Sep 14 19:50:08.345: INFO: PersistentVolumeClaim pvc-zc92h found but phase is Pending instead of Bound.
Sep 14 19:50:10.489: INFO: PersistentVolumeClaim pvc-zc92h found and phase=Bound (4.432313652s)
Sep 14 19:50:10.489: INFO: Waiting up to 3m0s for PersistentVolume local-hcng5 to have phase Bound
Sep 14 19:50:10.678: INFO: PersistentVolume local-hcng5 found and phase=Bound (188.308401ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-k6mw
STEP: Creating a pod to test atomic-volume-subpath
Sep 14 19:50:11.113: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-k6mw" in namespace "provisioning-1344" to be "Succeeded or Failed"
Sep 14 19:50:11.261: INFO: Pod "pod-subpath-test-preprovisionedpv-k6mw": Phase="Pending", Reason="", readiness=false. Elapsed: 148.154176ms
Sep 14 19:50:13.404: INFO: Pod "pod-subpath-test-preprovisionedpv-k6mw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.291513804s
Sep 14 19:50:15.549: INFO: Pod "pod-subpath-test-preprovisionedpv-k6mw": Phase="Running", Reason="", readiness=true. Elapsed: 4.435739977s
Sep 14 19:50:17.692: INFO: Pod "pod-subpath-test-preprovisionedpv-k6mw": Phase="Running", Reason="", readiness=true. Elapsed: 6.579283499s
Sep 14 19:50:19.837: INFO: Pod "pod-subpath-test-preprovisionedpv-k6mw": Phase="Running", Reason="", readiness=true. Elapsed: 8.724421928s
Sep 14 19:50:21.981: INFO: Pod "pod-subpath-test-preprovisionedpv-k6mw": Phase="Running", Reason="", readiness=true. Elapsed: 10.867702202s
... skipping 2 lines ...
Sep 14 19:50:28.457: INFO: Pod "pod-subpath-test-preprovisionedpv-k6mw": Phase="Running", Reason="", readiness=true. Elapsed: 17.344241491s
Sep 14 19:50:30.601: INFO: Pod "pod-subpath-test-preprovisionedpv-k6mw": Phase="Running", Reason="", readiness=true. Elapsed: 19.488147107s
Sep 14 19:50:32.745: INFO: Pod "pod-subpath-test-preprovisionedpv-k6mw": Phase="Running", Reason="", readiness=true. Elapsed: 21.63202843s
Sep 14 19:50:34.889: INFO: Pod "pod-subpath-test-preprovisionedpv-k6mw": Phase="Running", Reason="", readiness=true. Elapsed: 23.775907363s
Sep 14 19:50:37.032: INFO: Pod "pod-subpath-test-preprovisionedpv-k6mw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 25.919205507s
STEP: Saw pod success
Sep 14 19:50:37.032: INFO: Pod "pod-subpath-test-preprovisionedpv-k6mw" satisfied condition "Succeeded or Failed"
Sep 14 19:50:37.175: INFO: Trying to get logs from node ip-172-20-41-171.sa-east-1.compute.internal pod pod-subpath-test-preprovisionedpv-k6mw container test-container-subpath-preprovisionedpv-k6mw: <nil>
STEP: delete the pod
Sep 14 19:50:37.474: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-k6mw to disappear
Sep 14 19:50:37.616: INFO: Pod pod-subpath-test-preprovisionedpv-k6mw no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-k6mw
Sep 14 19:50:37.616: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-k6mw" in namespace "provisioning-1344"
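The `Phase="Pending"` → `Phase="Running"` → `Phase="Succeeded"` lines above come from polling the pod until it reaches a terminal phase. A sketch of that poll in shell — the hard-coded `phases` array stands in for repeated `kubectl get pod -o jsonpath='{.status.phase}'` queries (an assumption for illustration, not the framework's code):

```shell
# Simulated phase sequence; in practice each entry would be a live kubectl query.
phases=(Pending Pending Running Succeeded)
start=$(date +%s)
for phase in "${phases[@]}"; do
  echo "Phase=\"$phase\", elapsed $(( $(date +%s) - start ))s"
  case "$phase" in
    Succeeded|Failed)
      echo 'satisfied condition "Succeeded or Failed"'
      break ;;
  esac
  sleep 1
done
```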
... skipping 45 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":44,"skipped":247,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]"]}
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:50:40.594: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 50 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 14 19:50:41.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-limits-on-node-9693" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Volume limits should verify that all nodes have volume limits","total":-1,"completed":45,"skipped":250,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]"]}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:50:41.947: INFO: Driver local doesn't support ext3 -- skipping
... skipping 64 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 14 19:50:41.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8045" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for cronjob","total":-1,"completed":49,"skipped":443,"failed":5,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-network] Conntrack should drop INVALID conntrack entries"]}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:50:42.109: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 54 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 14 19:50:43.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-7864" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":-1,"completed":46,"skipped":258,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]"]}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:50:43.415: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 29 lines ...
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Sep 14 19:50:41.498: INFO: Successfully updated pod "pod-update-activedeadlineseconds-95674b64-12c3-41cd-845e-13b55ae6cbfe"
Sep 14 19:50:41.499: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-95674b64-12c3-41cd-845e-13b55ae6cbfe" in namespace "pods-3616" to be "terminated due to deadline exceeded"
Sep 14 19:50:41.642: INFO: Pod "pod-update-activedeadlineseconds-95674b64-12c3-41cd-845e-13b55ae6cbfe": Phase="Running", Reason="", readiness=true. Elapsed: 143.121375ms
Sep 14 19:50:43.786: INFO: Pod "pod-update-activedeadlineseconds-95674b64-12c3-41cd-845e-13b55ae6cbfe": Phase="Running", Reason="", readiness=true. Elapsed: 2.28730196s
Sep 14 19:50:45.931: INFO: Pod "pod-update-activedeadlineseconds-95674b64-12c3-41cd-845e-13b55ae6cbfe": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 4.432246831s
Sep 14 19:50:45.931: INFO: Pod "pod-update-activedeadlineseconds-95674b64-12c3-41cd-845e-13b55ae6cbfe" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 14 19:50:45.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3616" for this suite.


• [SLOW TEST:8.959 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":-1,"completed":37,"skipped":276,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-network] DNS should resolve DNS of partial qualified names for the cluster [LinuxOnly]","[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]"]}
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 14 19:50:46.231: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 17 lines ...
STEP: Destroying namespace "services-4686" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

•
------------------------------
{"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":-1,"completed":38,"skipped":276,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-network] DNS should resolve DNS of partial qualified names for the cluster [LinuxOnly]","[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]"]}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:50:49.157: INFO: Only supported for providers [azure] (not aws)
... skipping 32 lines ...
Sep 14 19:50:03.644: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-8672s7p7z
STEP: creating a claim
Sep 14 19:50:03.788: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-j69s
STEP: Creating a pod to test atomic-volume-subpath
Sep 14 19:50:04.222: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-j69s" in namespace "provisioning-8672" to be "Succeeded or Failed"
Sep 14 19:50:04.365: INFO: Pod "pod-subpath-test-dynamicpv-j69s": Phase="Pending", Reason="", readiness=false. Elapsed: 142.936884ms
Sep 14 19:50:06.514: INFO: Pod "pod-subpath-test-dynamicpv-j69s": Phase="Pending", Reason="", readiness=false. Elapsed: 2.291773846s
Sep 14 19:50:08.658: INFO: Pod "pod-subpath-test-dynamicpv-j69s": Phase="Pending", Reason="", readiness=false. Elapsed: 4.4359307s
Sep 14 19:50:10.804: INFO: Pod "pod-subpath-test-dynamicpv-j69s": Phase="Pending", Reason="", readiness=false. Elapsed: 6.581635683s
Sep 14 19:50:12.947: INFO: Pod "pod-subpath-test-dynamicpv-j69s": Phase="Pending", Reason="", readiness=false. Elapsed: 8.724907778s
Sep 14 19:50:15.091: INFO: Pod "pod-subpath-test-dynamicpv-j69s": Phase="Pending", Reason="", readiness=false. Elapsed: 10.868495676s
... skipping 9 lines ...
Sep 14 19:50:36.557: INFO: Pod "pod-subpath-test-dynamicpv-j69s": Phase="Running", Reason="", readiness=true. Elapsed: 32.334248206s
Sep 14 19:50:38.700: INFO: Pod "pod-subpath-test-dynamicpv-j69s": Phase="Running", Reason="", readiness=true. Elapsed: 34.477629028s
Sep 14 19:50:40.853: INFO: Pod "pod-subpath-test-dynamicpv-j69s": Phase="Running", Reason="", readiness=true. Elapsed: 36.630882981s
Sep 14 19:50:43.008: INFO: Pod "pod-subpath-test-dynamicpv-j69s": Phase="Running", Reason="", readiness=true. Elapsed: 38.785083181s
Sep 14 19:50:45.151: INFO: Pod "pod-subpath-test-dynamicpv-j69s": Phase="Succeeded", Reason="", readiness=false. Elapsed: 40.928275992s
STEP: Saw pod success
Sep 14 19:50:45.151: INFO: Pod "pod-subpath-test-dynamicpv-j69s" satisfied condition "Succeeded or Failed"
Sep 14 19:50:45.294: INFO: Trying to get logs from node ip-172-20-48-93.sa-east-1.compute.internal pod pod-subpath-test-dynamicpv-j69s container test-container-subpath-dynamicpv-j69s: <nil>
STEP: delete the pod
Sep 14 19:50:45.597: INFO: Waiting for pod pod-subpath-test-dynamicpv-j69s to disappear
Sep 14 19:50:45.741: INFO: Pod pod-subpath-test-dynamicpv-j69s no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-j69s
Sep 14 19:50:45.741: INFO: Deleting pod "pod-subpath-test-dynamicpv-j69s" in namespace "provisioning-8672"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":26,"skipped":279,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:50:57.334: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 30 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 14 19:50:58.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "metrics-grabber-9579" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from API server.","total":-1,"completed":27,"skipped":286,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:50:59.298: INFO: Only supported for providers [azure] (not aws)
... skipping 41 lines ...
Sep 14 19:50:53.868: INFO: PersistentVolumeClaim pvc-n4vtv found but phase is Pending instead of Bound.
Sep 14 19:50:56.012: INFO: PersistentVolumeClaim pvc-n4vtv found and phase=Bound (13.006418968s)
Sep 14 19:50:56.012: INFO: Waiting up to 3m0s for PersistentVolume local-vfsh8 to have phase Bound
Sep 14 19:50:56.155: INFO: PersistentVolume local-vfsh8 found and phase=Bound (142.742647ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-pwvx
STEP: Creating a pod to test exec-volume-test
Sep 14 19:50:56.586: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-pwvx" in namespace "volume-774" to be "Succeeded or Failed"
Sep 14 19:50:56.730: INFO: Pod "exec-volume-test-preprovisionedpv-pwvx": Phase="Pending", Reason="", readiness=false. Elapsed: 143.605854ms
Sep 14 19:50:58.874: INFO: Pod "exec-volume-test-preprovisionedpv-pwvx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.288052391s
STEP: Saw pod success
Sep 14 19:50:58.874: INFO: Pod "exec-volume-test-preprovisionedpv-pwvx" satisfied condition "Succeeded or Failed"
Sep 14 19:50:59.018: INFO: Trying to get logs from node ip-172-20-41-171.sa-east-1.compute.internal pod exec-volume-test-preprovisionedpv-pwvx container exec-container-preprovisionedpv-pwvx: <nil>
STEP: delete the pod
Sep 14 19:50:59.309: INFO: Waiting for pod exec-volume-test-preprovisionedpv-pwvx to disappear
Sep 14 19:50:59.459: INFO: Pod exec-volume-test-preprovisionedpv-pwvx no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-pwvx
Sep 14 19:50:59.459: INFO: Deleting pod "exec-volume-test-preprovisionedpv-pwvx" in namespace "volume-774"
... skipping 123 lines ...
• [SLOW TEST:54.338 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  iterative rollouts should eventually progress
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:130
------------------------------
{"msg":"PASSED [sig-apps] Deployment iterative rollouts should eventually progress","total":-1,"completed":36,"skipped":195,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:51:01.458: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 268 lines ...
• [SLOW TEST:15.625 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":-1,"completed":39,"skipped":284,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-network] DNS should resolve DNS of partial qualified names for the cluster [LinuxOnly]","[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]"]}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:51:04.819: INFO: Only supported for providers [vsphere] (not aws)
... skipping 96 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volumes.go:47
    should be mountable
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volumes.go:48
------------------------------
{"msg":"PASSED [sig-storage] Volumes ConfigMap should be mountable","total":-1,"completed":41,"skipped":317,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]"]}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:51:07.673: INFO: Only supported for providers [azure] (not aws)
... skipping 131 lines ...
STEP: Creating a kubernetes client
Sep 14 19:51:07.807: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating the pod
Sep 14 19:51:08.526: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 14 19:51:11.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-2430" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":-1,"completed":42,"skipped":347,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]"]}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
... skipping 9 lines ...
Sep 14 19:50:37.072: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-8459cht7x
STEP: creating a claim
Sep 14 19:50:37.217: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-lhqt
STEP: Creating a pod to test subpath
Sep 14 19:50:37.654: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-lhqt" in namespace "provisioning-8459" to be "Succeeded or Failed"
Sep 14 19:50:37.801: INFO: Pod "pod-subpath-test-dynamicpv-lhqt": Phase="Pending", Reason="", readiness=false. Elapsed: 147.075653ms
Sep 14 19:50:39.945: INFO: Pod "pod-subpath-test-dynamicpv-lhqt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.290987317s
Sep 14 19:50:42.089: INFO: Pod "pod-subpath-test-dynamicpv-lhqt": Phase="Pending", Reason="", readiness=false. Elapsed: 4.434878266s
Sep 14 19:50:44.232: INFO: Pod "pod-subpath-test-dynamicpv-lhqt": Phase="Pending", Reason="", readiness=false. Elapsed: 6.578754912s
Sep 14 19:50:46.380: INFO: Pod "pod-subpath-test-dynamicpv-lhqt": Phase="Pending", Reason="", readiness=false. Elapsed: 8.725930797s
Sep 14 19:50:48.525: INFO: Pod "pod-subpath-test-dynamicpv-lhqt": Phase="Pending", Reason="", readiness=false. Elapsed: 10.870925012s
Sep 14 19:50:50.670: INFO: Pod "pod-subpath-test-dynamicpv-lhqt": Phase="Pending", Reason="", readiness=false. Elapsed: 13.015951043s
Sep 14 19:50:52.814: INFO: Pod "pod-subpath-test-dynamicpv-lhqt": Phase="Pending", Reason="", readiness=false. Elapsed: 15.160714855s
Sep 14 19:50:54.958: INFO: Pod "pod-subpath-test-dynamicpv-lhqt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.304767213s
STEP: Saw pod success
Sep 14 19:50:54.958: INFO: Pod "pod-subpath-test-dynamicpv-lhqt" satisfied condition "Succeeded or Failed"
Sep 14 19:50:55.102: INFO: Trying to get logs from node ip-172-20-50-202.sa-east-1.compute.internal pod pod-subpath-test-dynamicpv-lhqt container test-container-subpath-dynamicpv-lhqt: <nil>
STEP: delete the pod
Sep 14 19:50:55.395: INFO: Waiting for pod pod-subpath-test-dynamicpv-lhqt to disappear
Sep 14 19:50:55.538: INFO: Pod pod-subpath-test-dynamicpv-lhqt no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-lhqt
Sep 14 19:50:55.538: INFO: Deleting pod "pod-subpath-test-dynamicpv-lhqt" in namespace "provisioning-8459"
... skipping 38 lines ...
Sep 14 19:50:35.002: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-9495jth8q
STEP: creating a claim
Sep 14 19:50:35.146: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-768d
STEP: Creating a pod to test subpath
Sep 14 19:50:35.582: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-768d" in namespace "provisioning-9495" to be "Succeeded or Failed"
Sep 14 19:50:35.726: INFO: Pod "pod-subpath-test-dynamicpv-768d": Phase="Pending", Reason="", readiness=false. Elapsed: 143.260027ms
Sep 14 19:50:37.873: INFO: Pod "pod-subpath-test-dynamicpv-768d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.29085331s
Sep 14 19:50:40.032: INFO: Pod "pod-subpath-test-dynamicpv-768d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.449188934s
Sep 14 19:50:42.176: INFO: Pod "pod-subpath-test-dynamicpv-768d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.593098437s
Sep 14 19:50:44.320: INFO: Pod "pod-subpath-test-dynamicpv-768d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.737742507s
Sep 14 19:50:46.465: INFO: Pod "pod-subpath-test-dynamicpv-768d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.882145435s
Sep 14 19:50:48.609: INFO: Pod "pod-subpath-test-dynamicpv-768d": Phase="Pending", Reason="", readiness=false. Elapsed: 13.026148132s
Sep 14 19:50:50.753: INFO: Pod "pod-subpath-test-dynamicpv-768d": Phase="Pending", Reason="", readiness=false. Elapsed: 15.170917255s
Sep 14 19:50:52.897: INFO: Pod "pod-subpath-test-dynamicpv-768d": Phase="Pending", Reason="", readiness=false. Elapsed: 17.314775146s
Sep 14 19:50:55.042: INFO: Pod "pod-subpath-test-dynamicpv-768d": Phase="Pending", Reason="", readiness=false. Elapsed: 19.459239062s
Sep 14 19:50:57.185: INFO: Pod "pod-subpath-test-dynamicpv-768d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 21.602943428s
STEP: Saw pod success
Sep 14 19:50:57.185: INFO: Pod "pod-subpath-test-dynamicpv-768d" satisfied condition "Succeeded or Failed"
Sep 14 19:50:57.329: INFO: Trying to get logs from node ip-172-20-50-202.sa-east-1.compute.internal pod pod-subpath-test-dynamicpv-768d container test-container-volume-dynamicpv-768d: <nil>
STEP: delete the pod
Sep 14 19:50:57.641: INFO: Waiting for pod pod-subpath-test-dynamicpv-768d to disappear
Sep 14 19:50:57.793: INFO: Pod pod-subpath-test-dynamicpv-768d no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-768d
Sep 14 19:50:57.793: INFO: Deleting pod "pod-subpath-test-dynamicpv-768d" in namespace "provisioning-9495"
... skipping 20 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory","total":-1,"completed":5,"skipped":60,"failed":2,"failures":["[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]}

SSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:51:14.574: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 23 lines ...
Sep 14 19:51:11.692: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:77
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Sep 14 19:51:12.561: INFO: Waiting up to 5m0s for pod "security-context-99d3ede4-db41-412e-956d-304c50179475" in namespace "security-context-5884" to be "Succeeded or Failed"
Sep 14 19:51:12.704: INFO: Pod "security-context-99d3ede4-db41-412e-956d-304c50179475": Phase="Pending", Reason="", readiness=false. Elapsed: 143.514512ms
Sep 14 19:51:14.848: INFO: Pod "security-context-99d3ede4-db41-412e-956d-304c50179475": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.287699058s
STEP: Saw pod success
Sep 14 19:51:14.848: INFO: Pod "security-context-99d3ede4-db41-412e-956d-304c50179475" satisfied condition "Succeeded or Failed"
Sep 14 19:51:14.992: INFO: Trying to get logs from node ip-172-20-41-171.sa-east-1.compute.internal pod security-context-99d3ede4-db41-412e-956d-304c50179475 container test-container: <nil>
STEP: delete the pod
Sep 14 19:51:15.286: INFO: Waiting for pod security-context-99d3ede4-db41-412e-956d-304c50179475 to disappear
Sep 14 19:51:15.430: INFO: Pod security-context-99d3ede4-db41-412e-956d-304c50179475 no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 14 19:51:15.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-5884" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]","total":-1,"completed":43,"skipped":350,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:51:15.739: INFO: Only supported for providers [gce gke] (not aws)
... skipping 59 lines ...
• [SLOW TEST:6.170 seconds]
[sig-apps] DisruptionController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  evictions: enough pods, replicaSet, percentage => should allow an eviction
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:267
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController evictions: enough pods, replicaSet, percentage =\u003e should allow an eviction","total":-1,"completed":6,"skipped":72,"failed":2,"failures":["[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:51:20.769: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 25 lines ...
[It] should call prestop when killing a pod  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating server pod server in namespace prestop-7233
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-7233
STEP: Deleting pre-stop pod
STEP: Error validating prestop: the server is currently unable to handle the request (get pods server)
STEP: Error validating prestop: the server is currently unable to handle the request (get pods server)
STEP: Error validating prestop: the server is currently unable to handle the request (get pods server)
Sep 14 19:51:20.154: FAIL: validating pre-stop.
Unexpected error:
    <*errors.errorString | 0xc000236240>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 21 lines ...
Sep 14 19:51:20.446: INFO: At 2021-09-14 19:49:35 +0000 UTC - event for server: {kubelet ip-172-20-50-202.sa-east-1.compute.internal} Started: Started container agnhost-container
Sep 14 19:51:20.446: INFO: At 2021-09-14 19:49:37 +0000 UTC - event for tester: {default-scheduler } Scheduled: Successfully assigned prestop-7233/tester to ip-172-20-48-74.sa-east-1.compute.internal
Sep 14 19:51:20.446: INFO: At 2021-09-14 19:49:37 +0000 UTC - event for tester: {kubelet ip-172-20-48-74.sa-east-1.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29-1" already present on machine
Sep 14 19:51:20.446: INFO: At 2021-09-14 19:49:37 +0000 UTC - event for tester: {kubelet ip-172-20-48-74.sa-east-1.compute.internal} Created: Created container tester
Sep 14 19:51:20.446: INFO: At 2021-09-14 19:49:38 +0000 UTC - event for tester: {kubelet ip-172-20-48-74.sa-east-1.compute.internal} Started: Started container tester
Sep 14 19:51:20.446: INFO: At 2021-09-14 19:49:40 +0000 UTC - event for tester: {kubelet ip-172-20-48-74.sa-east-1.compute.internal} Killing: Stopping container tester
Sep 14 19:51:20.446: INFO: At 2021-09-14 19:50:12 +0000 UTC - event for tester: {kubelet ip-172-20-48-74.sa-east-1.compute.internal} FailedPreStopHook: Exec lifecycle hook ([wget -O- --post-data={"Source": "prestop"} http://100.96.2.114:8080/write]) for Container "tester" in Pod "tester_prestop-7233(d0a7b651-eb56-4477-ac64-65c6e4ab5b10)" failed - error: command 'wget -O- --post-data={"Source": "prestop"} http://100.96.2.114:8080/write' exited with 137: Connecting to 100.96.2.114:8080 (100.96.2.114:8080)
, message: "Connecting to 100.96.2.114:8080 (100.96.2.114:8080)\n"
Sep 14 19:51:20.446: INFO: At 2021-09-14 19:51:20 +0000 UTC - event for server: {kubelet ip-172-20-50-202.sa-east-1.compute.internal} Killing: Stopping container agnhost-container
Sep 14 19:51:20.589: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Sep 14 19:51:20.589: INFO: 
Sep 14 19:51:20.735: INFO: 
Logging node info for node ip-172-20-38-237.sa-east-1.compute.internal
... skipping 262 lines ...
[sig-node] PreStop
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should call prestop when killing a pod  [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Sep 14 19:51:20.154: validating pre-stop.
  Unexpected error:
      <*errors.errorString | 0xc000236240>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

... skipping 89 lines ...
Sep 14 19:46:28.606: INFO: stderr: ""
Sep 14 19:46:28.606: INFO: stdout: "true"
Sep 14 19:46:28.606: INFO: Running '/tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-564 get pods update-demo-nautilus-ffnhr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
Sep 14 19:46:29.140: INFO: stderr: ""
Sep 14 19:46:29.140: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4"
Sep 14 19:46:29.140: INFO: validating pod update-demo-nautilus-ffnhr
Sep 14 19:46:59.284: INFO: update-demo-nautilus-ffnhr is running right image but validator function failed: the server is currently unable to handle the request (get pods update-demo-nautilus-ffnhr)
Sep 14 19:47:04.287: INFO: Running '/tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-564 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
Sep 14 19:47:04.828: INFO: stderr: ""
Sep 14 19:47:04.828: INFO: stdout: "update-demo-nautilus-ffnhr update-demo-nautilus-zd67z "
Sep 14 19:47:04.828: INFO: Running '/tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-564 get pods update-demo-nautilus-ffnhr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
Sep 14 19:47:05.349: INFO: stderr: ""
Sep 14 19:47:05.349: INFO: stdout: "true"
Sep 14 19:47:05.349: INFO: Running '/tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-564 get pods update-demo-nautilus-ffnhr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
Sep 14 19:47:05.883: INFO: stderr: ""
Sep 14 19:47:05.883: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4"
Sep 14 19:47:05.883: INFO: validating pod update-demo-nautilus-ffnhr
Sep 14 19:47:36.027: INFO: update-demo-nautilus-ffnhr is running right image but validator function failed: the server is currently unable to handle the request (get pods update-demo-nautilus-ffnhr)
Sep 14 19:47:41.028: INFO: Running '/tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-564 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
Sep 14 19:47:41.690: INFO: stderr: ""
Sep 14 19:47:41.690: INFO: stdout: "update-demo-nautilus-ffnhr update-demo-nautilus-zd67z "
Sep 14 19:47:41.690: INFO: Running '/tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-564 get pods update-demo-nautilus-ffnhr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
Sep 14 19:47:42.206: INFO: stderr: ""
Sep 14 19:47:42.206: INFO: stdout: "true"
Sep 14 19:47:42.206: INFO: Running '/tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-564 get pods update-demo-nautilus-ffnhr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
Sep 14 19:47:42.718: INFO: stderr: ""
Sep 14 19:47:42.718: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4"
Sep 14 19:47:42.718: INFO: validating pod update-demo-nautilus-ffnhr
Sep 14 19:48:12.862: INFO: update-demo-nautilus-ffnhr is running right image but validator function failed: the server is currently unable to handle the request (get pods update-demo-nautilus-ffnhr)
Sep 14 19:48:17.863: INFO: Running '/tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-564 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
Sep 14 19:48:18.382: INFO: stderr: ""
Sep 14 19:48:18.382: INFO: stdout: "update-demo-nautilus-ffnhr update-demo-nautilus-zd67z "
Sep 14 19:48:18.382: INFO: Running '/tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-564 get pods update-demo-nautilus-ffnhr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
Sep 14 19:48:18.899: INFO: stderr: ""
Sep 14 19:48:18.899: INFO: stdout: "true"
Sep 14 19:48:18.899: INFO: Running '/tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-564 get pods update-demo-nautilus-ffnhr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
Sep 14 19:48:19.428: INFO: stderr: ""
Sep 14 19:48:19.428: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4"
Sep 14 19:48:19.428: INFO: validating pod update-demo-nautilus-ffnhr
Sep 14 19:48:49.573: INFO: update-demo-nautilus-ffnhr is running right image but validator function failed: the server is currently unable to handle the request (get pods update-demo-nautilus-ffnhr)
Sep 14 19:48:54.576: INFO: Running '/tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-564 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
Sep 14 19:48:55.116: INFO: stderr: ""
Sep 14 19:48:55.116: INFO: stdout: "update-demo-nautilus-ffnhr update-demo-nautilus-zd67z "
Sep 14 19:48:55.116: INFO: Running '/tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-564 get pods update-demo-nautilus-ffnhr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
Sep 14 19:48:55.641: INFO: stderr: ""
Sep 14 19:48:55.641: INFO: stdout: "true"
Sep 14 19:48:55.641: INFO: Running '/tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-564 get pods update-demo-nautilus-ffnhr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
Sep 14 19:48:56.180: INFO: stderr: ""
Sep 14 19:48:56.180: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4"
Sep 14 19:48:56.180: INFO: validating pod update-demo-nautilus-ffnhr
Sep 14 19:49:26.333: INFO: update-demo-nautilus-ffnhr is running right image but validator function failed: the server is currently unable to handle the request (get pods update-demo-nautilus-ffnhr)
Sep 14 19:49:31.334: INFO: Running '/tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-564 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
Sep 14 19:49:31.997: INFO: stderr: ""
Sep 14 19:49:31.998: INFO: stdout: "update-demo-nautilus-ffnhr update-demo-nautilus-zd67z "
Sep 14 19:49:31.998: INFO: Running '/tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-564 get pods update-demo-nautilus-ffnhr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
Sep 14 19:49:32.514: INFO: stderr: ""
Sep 14 19:49:32.514: INFO: stdout: "true"
Sep 14 19:49:32.514: INFO: Running '/tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-564 get pods update-demo-nautilus-ffnhr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
Sep 14 19:49:33.032: INFO: stderr: ""
Sep 14 19:49:33.032: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4"
Sep 14 19:49:33.032: INFO: validating pod update-demo-nautilus-ffnhr
Sep 14 19:50:03.176: INFO: update-demo-nautilus-ffnhr is running right image but validator function failed: the server is currently unable to handle the request (get pods update-demo-nautilus-ffnhr)
Sep 14 19:50:08.178: INFO: Running '/tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-564 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
Sep 14 19:50:08.839: INFO: stderr: ""
Sep 14 19:50:08.839: INFO: stdout: "update-demo-nautilus-ffnhr update-demo-nautilus-zd67z "
Sep 14 19:50:08.839: INFO: Running '/tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-564 get pods update-demo-nautilus-ffnhr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
Sep 14 19:50:09.358: INFO: stderr: ""
Sep 14 19:50:09.358: INFO: stdout: "true"
Sep 14 19:50:09.358: INFO: Running '/tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-564 get pods update-demo-nautilus-ffnhr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
Sep 14 19:50:09.877: INFO: stderr: ""
Sep 14 19:50:09.877: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4"
Sep 14 19:50:09.877: INFO: validating pod update-demo-nautilus-ffnhr
Sep 14 19:50:40.022: INFO: update-demo-nautilus-ffnhr is running right image but validator function failed: the server is currently unable to handle the request (get pods update-demo-nautilus-ffnhr)
Sep 14 19:50:45.022: INFO: Running '/tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-564 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
Sep 14 19:50:45.684: INFO: stderr: ""
Sep 14 19:50:45.684: INFO: stdout: "update-demo-nautilus-ffnhr update-demo-nautilus-zd67z "
Sep 14 19:50:45.684: INFO: Running '/tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-564 get pods update-demo-nautilus-ffnhr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
Sep 14 19:50:46.200: INFO: stderr: ""
Sep 14 19:50:46.200: INFO: stdout: "true"
Sep 14 19:50:46.200: INFO: Running '/tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-564 get pods update-demo-nautilus-ffnhr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
Sep 14 19:50:46.720: INFO: stderr: ""
Sep 14 19:50:46.720: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4"
Sep 14 19:50:46.720: INFO: validating pod update-demo-nautilus-ffnhr
Sep 14 19:51:16.864: INFO: update-demo-nautilus-ffnhr is running right image but validator function failed: the server is currently unable to handle the request (get pods update-demo-nautilus-ffnhr)
Sep 14 19:51:21.864: FAIL: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state

Full Stack Trace
k8s.io/kubernetes/test/e2e/kubectl.glob..func1.6.2()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:311 +0x29b
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc003ab0000)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
... skipping 298 lines ...
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

    Sep 14 19:51:21.864: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:311
------------------------------
{"msg":"FAILED [sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","total":-1,"completed":29,"skipped":159,"failed":2,"failures":["[sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a ClusterIP service","[sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:51:30.274: INFO: Only supported for providers [azure] (not aws)
... skipping 108 lines ...
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:351

    Requires at least 2 nodes (not 0)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
{"msg":"FAILED [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":-1,"completed":20,"skipped":113,"failed":3,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 14 19:51:26.777: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 14 19:51:27.639: INFO: Waiting up to 5m0s for pod "downwardapi-volume-715f3f1f-4666-4f1c-96d9-c0397c5b3cd5" in namespace "downward-api-1292" to be "Succeeded or Failed"
Sep 14 19:51:27.782: INFO: Pod "downwardapi-volume-715f3f1f-4666-4f1c-96d9-c0397c5b3cd5": Phase="Pending", Reason="", readiness=false. Elapsed: 142.960838ms
Sep 14 19:51:29.926: INFO: Pod "downwardapi-volume-715f3f1f-4666-4f1c-96d9-c0397c5b3cd5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.28723764s
Sep 14 19:51:32.071: INFO: Pod "downwardapi-volume-715f3f1f-4666-4f1c-96d9-c0397c5b3cd5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.43174617s
STEP: Saw pod success
Sep 14 19:51:32.071: INFO: Pod "downwardapi-volume-715f3f1f-4666-4f1c-96d9-c0397c5b3cd5" satisfied condition "Succeeded or Failed"
Sep 14 19:51:32.214: INFO: Trying to get logs from node ip-172-20-41-171.sa-east-1.compute.internal pod downwardapi-volume-715f3f1f-4666-4f1c-96d9-c0397c5b3cd5 container client-container: <nil>
STEP: delete the pod
Sep 14 19:51:32.506: INFO: Waiting for pod downwardapi-volume-715f3f1f-4666-4f1c-96d9-c0397c5b3cd5 to disappear
Sep 14 19:51:32.649: INFO: Pod downwardapi-volume-715f3f1f-4666-4f1c-96d9-c0397c5b3cd5 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.161 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide podname only [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":113,"failed":3,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 14 19:51:32.959: INFO: Only supported for providers [gce gke] (not aws)
... skipping 42 lines ...
• [SLOW TEST:6.378 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  pod should support shared volumes between containers [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":-1,"completed":28,"skipped":289,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]}
Sep 14 19:51:34.099: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-node] Pods Extended
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 7 lines ...
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Sep 14 19:51:29.696: INFO: start=2021-09-14 19:51:24.509558818 +0000 UTC m=+1653.702849738, now=2021-09-14 19:51:29.696481699 +0000 UTC m=+1658.889772609, kubelet pod: {"metadata":{"name":"pod-submit-remove-d9ffee52-9469-44bc-a833-80af87b12975","namespace":"pods-4198","uid":"90b8e3f7-a081-4826-b12e-6d56a7952fdb","resourceVersion":"44466","creationTimestamp":"2021-09-14T19:51:21Z","deletionTimestamp":"2021-09-14T19:51:54Z","deletionGracePeriodSeconds":30,"labels":{"name":"foo","time":"500613165"},"annotations":{"kubernetes.io/config.seen":"2021-09-14T19:51:21.725209350Z","kubernetes.io/config.source":"api"},"managedFields":[{"manager":"e2e.test","operation":"Update","apiVersion":"v1","time":"2021-09-14T19:51:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},"spec":{"volumes":[{"name":"kube-api-access-66vmv","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"agnhost-container","image":"k8s.gcr.io/e2e-test-images/agnhost:2.32","args":["pause"],"resources":{},"volumeMounts":[{"name":"kube-api-access-66vmv","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":0,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"ip-172-20-50-202.sa-east-1.compute.internal","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-09-14T19:51:21Z"},{"type":"Ready","status":"False","lastProbeTime":null,"lastTransitionTime":"2021-09-14T19:51:25Z","reason":"ContainersNotReady","message":"containers with unready status: [agnhost-container]"},{"type":"ContainersReady","status":"False","lastProbeTime":null,"lastTransitionTime":"2021-09-14T19:51:25Z","reason":"ContainersNotReady","message":"containers with unready status: [agnhost-container]"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-09-14T19:51:21Z"}],"hostIP":"172.20.50.202","podIP":"100.96.2.156","podIPs":[{"ip":"100.96.2.156"}],"startTime":"2021-09-14T19:51:21Z","containerStatuses":[{"name":"agnhost-container","state":{"terminated":{"exitCode":2,"reason":"Error","startedAt":"2021-09-14T19:51:22Z","finishedAt":"2021-09-14T19:51:24Z","containerID":"containerd://5d6986b6f82fdc245d31d383909287f0c479f2300656a28605f907cf72a6c521"}},"lastState":{},"ready":false,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/agnhost:2.32","imageID":"k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1","containerID":"containerd://5d6986b6f82fdc245d31d383909287f0c479f2300656a28605f907cf72a6c521","started":false}],"qosClass":"BestEffort"}}
Sep 14 19:51:34.662: INFO: start=2021-09-14 19:51:24.509558818 +0000 UTC m=+1653.702849738, now=2021-09-14 19:51:34.662157036 +0000 UTC m=+1663.855447970, kubelet pod: {"metadata":{"name":"pod-submit-remove-d9ffee52-9469-44bc-a833-80af87b12975","namespace":"pods-4198","uid":"90b8e3f7-a081-4826-b12e-6d56a7952fdb","resourceVersion":"44466","creationTimestamp":"2021-09-14T19:51:21Z","deletionTimestamp":"2021-09-14T19:51:54Z","deletionGracePeriodSeconds":30,"labels":{"name":"foo","time":"500613165"},"annotations":{"kubernetes.io/config.seen":"2021-09-14T19:51:21.725209350Z","kubernetes.io/config.source":"api"},"managedFields":[{"manager":"e2e.test","operation":"Update","apiVersion":"v1","time":"2021-09-14T19:51:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},"spec":{"volumes":[{"name":"kube-api-access-66vmv","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"agnhost-container","image":"k8s.gcr.io/e2e-test-images/agnhost:2.32","args":["pause"],"resources":{},"volumeMounts":[{"name":"kube-api-access-66vmv","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":0,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"ip-172-20-50-202.sa-east-1.compute.internal","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-09-14T19:51:21Z"},{"type":"Ready","status":"False","lastProbeTime":null,"lastTransitionTime":"2021-09-14T19:51:25Z","reason":"ContainersNotReady","message":"containers with unready status: [agnhost-container]"},{"type":"ContainersReady","status":"False","lastProbeTime":null,"lastTransitionTime":"2021-09-14T19:51:25Z","reason":"ContainersNotReady","message":"containers with unready status: [agnhost-container]"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-09-14T19:51:21Z"}],"hostIP":"172.20.50.202","podIP":"100.96.2.156","podIPs":[{"ip":"100.96.2.156"}],"startTime":"2021-09-14T19:51:21Z","containerStatuses":[{"name":"agnhost-container","state":{"terminated":{"exitCode":2,"reason":"Error","startedAt":"2021-09-14T19:51:22Z","finishedAt":"2021-09-14T19:51:24Z","containerID":"containerd://5d6986b6f82fdc245d31d383909287f0c479f2300656a28605f907cf72a6c521"}},"lastState":{},"ready":false,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/agnhost:2.32","imageID":"k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1","containerID":"containerd://5d6986b6f82fdc245d31d383909287f0c479f2300656a28605f907cf72a6c521","started":false}],"qosClass":"BestEffort"}}
Sep 14 19:51:39.665: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [sig-node] Pods Extended
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 14 19:51:39.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4198" for this suite.

... skipping 3 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  Delete Grace Period
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:51
    should be submitted and removed
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:62
------------------------------
{"msg":"PASSED [sig-node] Pods Extended Delete Grace Period should be submitted and removed","total":-1,"completed":7,"skipped":74,"failed":2,"failures":["[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]}
Sep 14 19:51:40.106: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 271 lines ...
  ----    ------     ----  ----               -------
  Normal  Scheduled  37s   default-scheduler  Successfully assigned pod-network-test-6068/netserver-3 to ip-172-20-50-202.sa-east-1.compute.internal
  Normal  Pulled     37s   kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
  Normal  Created    37s   kubelet            Created container webserver
  Normal  Started    36s   kubelet            Started container webserver

Sep 14 19:32:14.500: INFO: encountered error during dial (did not find expected responses... 
Tries 1
Command curl -g -q -s 'http://100.96.2.144:9080/dial?request=hostname&protocol=http&host=100.96.1.114&port=8080&tries=1'
retrieved map[]
expected map[netserver-0:{}])
Sep 14 19:32:14.500: INFO: ...failed...will try again in next pass
Sep 14 19:32:14.500: INFO: Breadth first check of 100.96.3.122 on host 172.20.48.74...
Sep 14 19:32:14.644: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://100.96.2.144:9080/dial?request=hostname&protocol=http&host=100.96.3.122&port=8080&tries=1'] Namespace:pod-network-test-6068 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Sep 14 19:32:14.644: INFO: >>> kubeConfig: /root/.kube/config
Sep 14 19:32:20.754: INFO: Waiting for responses: map[netserver-1:{}]
Sep 14 19:32:22.754: INFO: 
Output of kubectl describe pod pod-network-test-6068/netserver-0:
... skipping 240 lines ...
  ----    ------     ----  ----               -------
  Normal  Scheduled  49s   default-scheduler  Successfully assigned pod-network-test-6068/netserver-3 to ip-172-20-50-202.sa-east-1.compute.internal
  Normal  Pulled     49s   kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
  Normal  Created    49s   kubelet            Created container webserver
  Normal  Started    48s   kubelet            Started container webserver

Sep 14 19:32:26.070: INFO: encountered error during dial (did not find expected responses... 
Tries 1
Command curl -g -q -s 'http://100.96.2.144:9080/dial?request=hostname&protocol=http&host=100.96.3.122&port=8080&tries=1'
retrieved map[]
expected map[netserver-1:{}])
Sep 14 19:32:26.070: INFO: ...failed...will try again in next pass
Sep 14 19:32:26.070: INFO: Breadth first check of 100.96.4.119 on host 172.20.48.93...
Sep 14 19:32:26.214: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://100.96.2.144:9080/dial?request=hostname&protocol=http&host=100.96.4.119&port=8080&tries=1'] Namespace:pod-network-test-6068 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Sep 14 19:32:26.214: INFO: >>> kubeConfig: /root/.kube/config
Sep 14 19:32:32.198: INFO: Waiting for responses: map[netserver-2:{}]
Sep 14 19:32:34.198: INFO: 
Output of kubectl describe pod pod-network-test-6068/netserver-0:
... skipping 240 lines ...
  ----    ------     ----  ----               -------
  Normal  Scheduled  60s   default-scheduler  Successfully assigned pod-network-test-6068/netserver-3 to ip-172-20-50-202.sa-east-1.compute.internal
  Normal  Pulled     60s   kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
  Normal  Created    60s   kubelet            Created container webserver
  Normal  Started    59s   kubelet            Started container webserver

Sep 14 19:32:37.514: INFO: encountered error during dial (did not find expected responses... 
Tries 1
Command curl -g -q -s 'http://100.96.2.144:9080/dial?request=hostname&protocol=http&host=100.96.4.119&port=8080&tries=1'
retrieved map[]
expected map[netserver-2:{}])
Sep 14 19:32:37.514: INFO: ...failed...will try again in next pass
Sep 14 19:32:37.514: INFO: Breadth first check of 100.96.2.139 on host 172.20.50.202...
Sep 14 19:32:37.659: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://100.96.2.144:9080/dial?request=hostname&protocol=http&host=100.96.2.139&port=8080&tries=1'] Namespace:pod-network-test-6068 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Sep 14 19:32:37.659: INFO: >>> kubeConfig: /root/.kube/config
Sep 14 19:32:38.632: INFO: Waiting for responses: map[]
Sep 14 19:32:38.632: INFO: reached 100.96.2.139 after 0/1 tries
Sep 14 19:32:38.632: INFO: Going to retry 3 out of 4 pods....
... skipping 382 lines ...
  ----    ------     ----   ----               -------
  Normal  Scheduled  7m21s  default-scheduler  Successfully assigned pod-network-test-6068/netserver-3 to ip-172-20-50-202.sa-east-1.compute.internal
  Normal  Pulled     7m21s  kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
  Normal  Created    7m21s  kubelet            Created container webserver
  Normal  Started    7m20s  kubelet            Started container webserver

Sep 14 19:38:58.180: INFO: encountered error during dial (did not find expected responses... 
Tries 46
Command curl -g -q -s 'http://100.96.2.144:9080/dial?request=hostname&protocol=http&host=100.96.4.119&port=8080&tries=1'
retrieved map[]
expected map[netserver-2:{}])
Sep 14 19:38:58.180: INFO: ... Done probing pod [[[ 100.96.4.119 ]]]
Sep 14 19:38:58.180: INFO: succeeded at polling 3 out of 4 connections
... skipping 382 lines ...
  ----    ------     ----  ----               -------
  Normal  Scheduled  13m   default-scheduler  Successfully assigned pod-network-test-6068/netserver-3 to ip-172-20-50-202.sa-east-1.compute.internal
  Normal  Pulled     13m   kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
  Normal  Created    13m   kubelet            Created container webserver
  Normal  Started    13m   kubelet            Started container webserver

Sep 14 19:45:16.079: INFO: encountered error during dial (did not find expected responses... 
Tries 46
Command curl -g -q -s 'http://100.96.2.144:9080/dial?request=hostname&protocol=http&host=100.96.1.114&port=8080&tries=1'
retrieved map[]
expected map[netserver-0:{}])
Sep 14 19:45:16.079: INFO: ... Done probing pod [[[ 100.96.1.114 ]]]
Sep 14 19:45:16.079: INFO: succeeded at polling 2 out of 4 connections
... skipping 382 lines ...
  ----    ------     ----  ----               -------
  Normal  Scheduled  19m   default-scheduler  Successfully assigned pod-network-test-6068/netserver-3 to ip-172-20-50-202.sa-east-1.compute.internal
  Normal  Pulled     19m   kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
  Normal  Created    19m   kubelet            Created container webserver
  Normal  Started    19m   kubelet            Started container webserver

Sep 14 19:51:34.408: INFO: encountered error during dial (did not find expected responses... 
Tries 46
Command curl -g -q -s 'http://100.96.2.144:9080/dial?request=hostname&protocol=http&host=100.96.3.122&port=8080&tries=1'
retrieved map[]
expected map[netserver-1:{}])
Sep 14 19:51:34.408: INFO: ... Done probing pod [[[ 100.96.3.122 ]]]
Sep 14 19:51:34.408: INFO: succeeded at polling 1 out of 4 connections
Sep 14 19:51:34.408: INFO: pod polling failure summary:
Sep 14 19:51:34.408: INFO: Collected error: did not find expected responses... 
Tries 46
Command curl -g -q -s 'http://100.96.2.144:9080/dial?request=hostname&protocol=http&host=100.96.4.119&port=8080&tries=1'
retrieved map[]
expected map[netserver-2:{}]
Sep 14 19:51:34.408: INFO: Collected error: did not find expected responses... 
Tries 46
Command curl -g -q -s 'http://100.96.2.144:9080/dial?request=hostname&protocol=http&host=100.96.1.114&port=8080&tries=1'
retrieved map[]
expected map[netserver-0:{}]
Sep 14 19:51:34.408: INFO: Collected error: did not find expected responses... 
Tries 46
Command curl -g -q -s 'http://100.96.2.144:9080/dial?request=hostname&protocol=http&host=100.96.3.122&port=8080&tries=1'
retrieved map[]
expected map[netserver-1:{}]
Sep 14 19:51:34.408: FAIL: failed,  3 out of 4 connections failed

Full Stack Trace
k8s.io/kubernetes/test/e2e/common/network.glob..func1.1.2()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:82 +0x69
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001ed2480)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
... skipping 261 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23
  Granular Checks: Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30
    should function for intra-pod communication: http [NodeConformance] [Conformance] [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

    Sep 14 19:51:34.408: failed,  3 out of 4 connections failed

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:82
------------------------------
{"msg":"FAILED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":46,"failed":2,"failures":["[sig-storage] PersistentVolumes NFS when invoking the Recycle reclaim policy should test that a PV becomes Available and is clean after the PVC is deleted.","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}
Sep 14 19:51:40.641: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 48 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":-1,"completed":49,"skipped":335,"failed":1,"failures":["[sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]"]}
Sep 14 19:51:42.131: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral
... skipping 109 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support two pods which share the same volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:173
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support two pods which share the same volume","total":-1,"completed":23,"skipped":197,"failed":4,"failures":["[sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","[sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] DNS should support configurable pod resolv.conf"]}
Sep 14 19:51:46.459: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 18 lines ...
• [SLOW TEST:243.919 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":146,"failed":2,"failures":["[sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]"]}
Sep 14 19:51:46.878: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 7 lines ...
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Sep 14 19:50:46.088: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63767245845, loc:(*time.Location)(0x9de2b80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63767245845, loc:(*time.Location)(0x9de2b80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63767245845, loc:(*time.Location)(0x9de2b80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63767245845, loc:(*time.Location)(0x9de2b80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Sep 14 19:50:49.380: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
Sep 14 19:50:59.955: INFO: Waiting for webhook configuration to be ready...
Sep 14 19:51:10.343: INFO: Waiting for webhook configuration to be ready...
Sep 14 19:51:20.645: INFO: Waiting for webhook configuration to be ready...
Sep 14 19:51:30.944: INFO: Waiting for webhook configuration to be ready...
Sep 14 19:51:41.233: INFO: Waiting for webhook configuration to be ready...
Sep 14 19:51:41.233: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.errorString | 0xc0001c4250>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 422 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102


• Failure [70.583 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should unconditionally reject operations on fail closed webhook [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Sep 14 19:51:41.233: waiting for webhook configuration to be ready
  Unexpected error:
      <*errors.errorString | 0xc0001c4250>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1275
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":-1,"completed":46,"skipped":259,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]"]}
Sep 14 19:51:54.019: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 47 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:449
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":30,"skipped":164,"failed":2,"failures":["[sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a ClusterIP service","[sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]"]}
Sep 14 19:52:00.053: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 41 lines ...
Sep 14 19:51:10.828: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-2611
Sep 14 19:51:10.972: INFO: creating *v1.StatefulSet: csi-mock-volumes-2611-9567/csi-mockplugin-attacher
Sep 14 19:51:11.118: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-2611"
Sep 14 19:51:11.262: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-2611 to register on node ip-172-20-48-93.sa-east-1.compute.internal
STEP: Creating pod
STEP: checking for CSIInlineVolumes feature
Sep 14 19:51:16.336: INFO: Error getting logs for pod inline-volume-cd45n: the server rejected our request for an unknown reason (get pods inline-volume-cd45n)
Sep 14 19:51:16.480: INFO: Deleting pod "inline-volume-cd45n" in namespace "csi-mock-volumes-2611"
Sep 14 19:51:16.627: INFO: Wait up to 5m0s for pod "inline-volume-cd45n" to be fully deleted
STEP: Deleting the previously created pod
Sep 14 19:51:28.914: INFO: Deleting pod "pvc-volume-tester-nvbx5" in namespace "csi-mock-volumes-2611"
Sep 14 19:51:29.060: INFO: Wait up to 5m0s for pod "pvc-volume-tester-nvbx5" to be fully deleted
STEP: Checking CSI driver logs
Sep 14 19:51:37.494: INFO: Found volume attribute csi.storage.k8s.io/pod.name: pvc-volume-tester-nvbx5
Sep 14 19:51:37.494: INFO: Found volume attribute csi.storage.k8s.io/pod.namespace: csi-mock-volumes-2611
Sep 14 19:51:37.494: INFO: Found volume attribute csi.storage.k8s.io/pod.uid: 06669632-3b00-481e-99a8-2131598a60e9
Sep 14 19:51:37.494: INFO: Found volume attribute csi.storage.k8s.io/serviceAccount.name: default
Sep 14 19:51:37.494: INFO: Found volume attribute csi.storage.k8s.io/ephemeral: true
Sep 14 19:51:37.494: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"csi-9e9a5135df3106b66eb0df552c5a3b8ed6c2955e3d57b3f27db702990e5ef645","target_path":"/var/lib/kubelet/pods/06669632-3b00-481e-99a8-2131598a60e9/volumes/kubernetes.io~csi/my-volume/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-nvbx5
Sep 14 19:51:37.494: INFO: Deleting pod "pvc-volume-tester-nvbx5" in namespace "csi-mock-volumes-2611"
STEP: Cleaning up resources
STEP: deleting the test namespace: csi-mock-volumes-2611
STEP: Waiting for namespaces [csi-mock-volumes-2611] to vanish
STEP: uninstalling csi mock driver
... skipping 40 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI workload information using mock driver
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:443
    contain ephemeral=true when using inline volume
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","total":-1,"completed":40,"skipped":300,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-network] DNS should resolve DNS of partial qualified names for the cluster [LinuxOnly]","[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]"]}
Sep 14 19:52:01.002: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 18 lines ...
• [SLOW TEST:246.003 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should *not* be restarted by liveness probe because startup probe delays it
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:342
------------------------------
{"msg":"PASSED [sig-node] Probing container should *not* be restarted by liveness probe because startup probe delays it","total":-1,"completed":28,"skipped":193,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PVC and non-pre-bound PV: test write access","[sig-network] Services should be rejected when no endpoints exist"]}
Sep 14 19:52:02.581: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 221 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95
    should provide basic identity
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:126
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should provide basic identity","total":-1,"completed":17,"skipped":141,"failed":2,"failures":["[sig-network] Services should implement service.kubernetes.io/service-proxy-name","[sig-network] Services should be able to up and down services"]}
Sep 14 19:52:06.335: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 9 lines ...
STEP: creating replication controller affinity-clusterip-transition in namespace services-6214
I0914 19:49:22.943201    4713 runners.go:190] Created replication controller with name: affinity-clusterip-transition, namespace: services-6214, replica count: 3
I0914 19:49:26.094713    4713 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Sep 14 19:49:26.382: INFO: Creating new exec pod
Sep 14 19:49:29.819: INFO: Running '/tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-6214 exec execpod-affinityn2s7j -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80'
Sep 14 19:49:36.297: INFO: rc: 1
Sep 14 19:49:36.297: INFO: Service reachability failing with error: error running /tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-6214 exec execpod-affinityn2s7j -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 affinity-clusterip-transition 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
... skipping 252 lines ...
Sep 14 19:51:42.955: INFO: Running '/tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-6214 exec execpod-affinityn2s7j -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80'
Sep 14 19:51:50.324: INFO: rc: 1
Sep 14 19:51:50.324: INFO: Service reachability failing with error: error running /tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-6214 exec execpod-affinityn2s7j -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 affinity-clusterip-transition 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep 14 19:51:50.324: FAIL: Unexpected error:
    <*errors.errorString | 0xc002bf0220>: {
        s: "service is not reachable within 2m0s timeout on endpoint affinity-clusterip-transition:80 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint affinity-clusterip-transition:80 over TCP protocol
occurred

... skipping 207 lines ...
• Failure [170.179 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Sep 14 19:51:50.324: Unexpected error:
      <*errors.errorString | 0xc002bf0220>: {
          s: "service is not reachable within 2m0s timeout on endpoint affinity-clusterip-transition:80 over TCP protocol",
      }
      service is not reachable within 2m0s timeout on endpoint affinity-clusterip-transition:80 over TCP protocol
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2572
------------------------------
{"msg":"FAILED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":29,"skipped":230,"failed":3,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns should create 2 PVs and 4 PVCs: test write access","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]}
Sep 14 19:52:12.118: INFO: Running AfterSuite actions on all nodes


{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":18,"skipped":132,"failed":3,"failures":["[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns should create 3 PVs and 3 PVCs: test write access"]}
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 14 19:51:01.285: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 13 lines ...
STEP: Creating a validating webhook configuration
Sep 14 19:51:17.416: INFO: Waiting for webhook configuration to be ready...
Sep 14 19:51:27.806: INFO: Waiting for webhook configuration to be ready...
Sep 14 19:51:38.106: INFO: Waiting for webhook configuration to be ready...
Sep 14 19:51:48.405: INFO: Waiting for webhook configuration to be ready...
Sep 14 19:51:58.694: INFO: Waiting for webhook configuration to be ready...
Sep 14 19:51:58.694: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.errorString | 0xc000244250>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 344 lines ...
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a validating webhook should work [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Sep 14 19:51:58.694: waiting for webhook configuration to be ready
  Unexpected error:
      <*errors.errorString | 0xc000244250>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:432
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":-1,"completed":18,"skipped":132,"failed":4,"failures":["[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns should create 3 PVs and 3 PVCs: test write access","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]"]}
Sep 14 19:52:12.301: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 18 lines ...
• [SLOW TEST:44.487 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":-1,"completed":37,"skipped":248,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]}
Sep 14 19:52:16.878: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 48 lines ...
Sep 14 19:51:24.853: INFO: PersistentVolumeClaim pvc-q99c4 found and phase=Bound (143.572638ms)
STEP: Deleting the previously created pod
Sep 14 19:51:44.575: INFO: Deleting pod "pvc-volume-tester-pqzkl" in namespace "csi-mock-volumes-8045"
Sep 14 19:51:44.720: INFO: Wait up to 5m0s for pod "pvc-volume-tester-pqzkl" to be fully deleted
STEP: Checking CSI driver logs
Sep 14 19:51:57.153: INFO: Found volume attribute csi.storage.k8s.io/serviceAccount.tokens: {"":{"token":"eyJhbGciOiJSUzI1NiIsImtpZCI6InZTbFE0V19IZ20xbUxSQmEyaHFKVTJUdGhyUVc1R3hRdm1sMlc1bTF3VjQifQ.eyJhdWQiOlsia3ViZXJuZXRlcy5zdmMuZGVmYXVsdCJdLCJleHAiOjE2MzE2NDk3MDEsImlhdCI6MTYzMTY0OTEwMSwiaXNzIjoiaHR0cHM6Ly9hcGkuaW50ZXJuYWwuZTJlLWM0Y2UzNjQ4MzEtNjI2OTEudGVzdC1jbmNmLWF3cy5rOHMuaW8iLCJrdWJlcm5ldGVzLmlvIjp7Im5hbWVzcGFjZSI6ImNzaS1tb2NrLXZvbHVtZXMtODA0NSIsInBvZCI6eyJuYW1lIjoicHZjLXZvbHVtZS10ZXN0ZXItcHF6a2wiLCJ1aWQiOiJlZDM1NzAzMi1iZTliLTQzMzYtOTExZC1jYTg4MTQzNTE4ZjYifSwic2VydmljZWFjY291bnQiOnsibmFtZSI6ImRlZmF1bHQiLCJ1aWQiOiJiOGI2ZjI5YS05ZjFmLTRiN2MtOWFhYi1mMWZjNWJmNGVjOGQifX0sIm5iZiI6MTYzMTY0OTEwMSwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50OmNzaS1tb2NrLXZvbHVtZXMtODA0NTpkZWZhdWx0In0.PJ6WQrQRflxawgPj6dleW0qUb6KAnL-zMOkM4eHIYwJHNqjG771Kh6X3zhQHQcMlT99GmlhRVFpcHkZE3QTMhiKlJiRw3rEC9IgMgOicUa9mlTmWKRsy2G0z7C8pWlemN67gNornzq4FZlHsZbcIPT1MZ6YOPM-0Deiq0YlhfzKlj2VIT_8bUjuMF85obgCzEctMjCI1WePYpE7kQ4hjO7XWYejWBNO5ywTRMLnJYxZYLeNJetDaSMPtXoG_IxCVqRH8WDVC9cA33OA3KX9O-gYTp1UVkotA9QmGIhunPNP-GbBB6xCTo5CUAsYGgmI2oc2AP5SYYYN7MegEyRbFZQ","expirationTimestamp":"2021-09-14T20:01:41Z"}}
Sep 14 19:51:57.154: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/ed357032-be9b-4336-911d-ca88143518f6/volumes/kubernetes.io~csi/pvc-3e5a1c63-7cbc-4803-9094-7921d58a8a4d/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-pqzkl
Sep 14 19:51:57.154: INFO: Deleting pod "pvc-volume-tester-pqzkl" in namespace "csi-mock-volumes-8045"
STEP: Deleting claim pvc-q99c4
Sep 14 19:51:57.586: INFO: Waiting up to 2m0s for PersistentVolume pvc-3e5a1c63-7cbc-4803-9094-7921d58a8a4d to get deleted
Sep 14 19:51:57.730: INFO: PersistentVolume pvc-3e5a1c63-7cbc-4803-9094-7921d58a8a4d was removed
STEP: Deleting storageclass csi-mock-volumes-8045-scscc8z
... skipping 75 lines ...
Sep 14 19:49:10.074: INFO: The status of Pod netserver-3 is Running (Ready = true)
STEP: Creating test pods
Sep 14 19:49:13.232: INFO: Setting MaxTries for pod polling to 46 for networking test based on endpoint count 4
Sep 14 19:49:13.232: INFO: Going to poll 100.96.1.11 on port 8081 at least 0 times, with a maximum of 46 tries before failing
Sep 14 19:49:13.375: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 100.96.1.11 8081 | grep -v '^\s*$'] Namespace:pod-network-test-8990 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Sep 14 19:49:13.375: INFO: >>> kubeConfig: /root/.kube/config
Sep 14 19:49:15.367: INFO: Failed to execute "echo hostName | nc -w 1 -u 100.96.1.11 8081 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Sep 14 19:49:15.367: INFO: Waiting for [netserver-0] endpoints (expected=[netserver-0], actual=[])
... skipping 148 lines (37 further identical UDP poll attempts against 100.96.1.11:8081 between 19:49:17 and 19:51:53, each failing with exit code 1 and "Waiting for [netserver-0] endpoints (expected=[netserver-0], actual=[])") ...
Sep 14 19:51:51.247: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 100.96.1.11 8081 | grep -v '^\s*$'] Namespace:pod-network-test-8990 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Sep 14 19:51:51.247: INFO: >>> kubeConfig: /root/.kube/config
Sep 14 19:51:53.202: INFO: Failed to execute "echo hostName | nc -w 1 -u 100.96.1.11 8081 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Sep 14 19:51:53.202: INFO: Waiting for [netserver-0] endpoints (expected=[netserver-0], actual=[])
Sep 14 19:51:55.347: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 100.96.1.11 8081 | grep -v '^\s*$'] Namespace:pod-network-test-8990 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Sep 14 19:51:55.347: INFO: >>> kubeConfig: /root/.kube/config
Sep 14 19:51:57.306: INFO: Failed to execute "echo hostName | nc -w 1 -u 100.96.1.11 8081 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Sep 14 19:51:57.306: INFO: Waiting for [netserver-0] endpoints (expected=[netserver-0], actual=[])
Sep 14 19:51:59.451: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 100.96.1.11 8081 | grep -v '^\s*$'] Namespace:pod-network-test-8990 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Sep 14 19:51:59.451: INFO: >>> kubeConfig: /root/.kube/config
Sep 14 19:52:01.595: INFO: Failed to execute "echo hostName | nc -w 1 -u 100.96.1.11 8081 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Sep 14 19:52:01.595: INFO: Waiting for [netserver-0] endpoints (expected=[netserver-0], actual=[])
Sep 14 19:52:03.739: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 100.96.1.11 8081 | grep -v '^\s*$'] Namespace:pod-network-test-8990 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Sep 14 19:52:03.739: INFO: >>> kubeConfig: /root/.kube/config
Sep 14 19:52:05.717: INFO: Failed to execute "echo hostName | nc -w 1 -u 100.96.1.11 8081 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Sep 14 19:52:05.717: INFO: Waiting for [netserver-0] endpoints (expected=[netserver-0], actual=[])
Sep 14 19:52:07.861: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 100.96.1.11 8081 | grep -v '^\s*$'] Namespace:pod-network-test-8990 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Sep 14 19:52:07.861: INFO: >>> kubeConfig: /root/.kube/config
Sep 14 19:52:09.864: INFO: Failed to execute "echo hostName | nc -w 1 -u 100.96.1.11 8081 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Sep 14 19:52:09.864: INFO: Waiting for [netserver-0] endpoints (expected=[netserver-0], actual=[])
Sep 14 19:52:12.011: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 100.96.1.11 8081 | grep -v '^\s*$'] Namespace:pod-network-test-8990 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Sep 14 19:52:12.011: INFO: >>> kubeConfig: /root/.kube/config
Sep 14 19:52:13.989: INFO: Failed to execute "echo hostName | nc -w 1 -u 100.96.1.11 8081 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Sep 14 19:52:13.989: INFO: Waiting for [netserver-0] endpoints (expected=[netserver-0], actual=[])
Sep 14 19:52:16.134: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 100.96.1.11 8081 | grep -v '^\s*$'] Namespace:pod-network-test-8990 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Sep 14 19:52:16.134: INFO: >>> kubeConfig: /root/.kube/config
Sep 14 19:52:18.136: INFO: Failed to execute "echo hostName | nc -w 1 -u 100.96.1.11 8081 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Sep 14 19:52:18.136: INFO: Waiting for [netserver-0] endpoints (expected=[netserver-0], actual=[])
Sep 14 19:52:20.281: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 100.96.1.11 8081 | grep -v '^\s*$'] Namespace:pod-network-test-8990 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Sep 14 19:52:20.281: INFO: >>> kubeConfig: /root/.kube/config
Sep 14 19:52:22.286: INFO: Failed to execute "echo hostName | nc -w 1 -u 100.96.1.11 8081 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Sep 14 19:52:22.286: INFO: Waiting for [netserver-0] endpoints (expected=[netserver-0], actual=[])
Sep 14 19:52:24.287: INFO: 
Output of kubectl describe pod pod-network-test-8990/netserver-0:

Sep 14 19:52:24.287: INFO: Running '/tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=pod-network-test-8990 describe pod netserver-0 --namespace=pod-network-test-8990'
Sep 14 19:52:25.112: INFO: stderr: ""
... skipping 237 lines ...
  ----    ------     ----   ----               -------
  Normal  Scheduled  3m39s  default-scheduler  Successfully assigned pod-network-test-8990/netserver-3 to ip-172-20-50-202.sa-east-1.compute.internal
  Normal  Pulled     3m38s  kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
  Normal  Created    3m38s  kubelet            Created container webserver
  Normal  Started    3m38s  kubelet            Started container webserver

Sep 14 19:52:27.842: FAIL: Error dialing UDP from node to pod: failed to find expected endpoints, 
tries 46
Command echo hostName | nc -w 1 -u 100.96.1.11 8081
retrieved map[]
expected map[netserver-0:{}]
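The repeated exit-code-1 failures above come from the probe pipeline itself: when the UDP peer at 100.96.1.11:8081 does not reply, `nc` emits nothing, and `grep -v '^\s*$'` then exits 1 because it selected no lines. A minimal local sketch of that behavior (no cluster needed; this only demonstrates why empty output maps to exit code 1, it does not reproduce the network failure):

```shell
# An unanswered UDP probe yields empty output; feeding empty input to the
# same grep filter used by the test produces the exit code 1 seen in the log.
printf '' | grep -v '^\s*$'
echo "grep exit code: $?"   # prints "grep exit code: 1"
```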

Full Stack Trace
... skipping 191 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23
  Granular Checks: Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

    Sep 14 19:52:27.843: Error dialing UDP from node to pod: failed to find expected endpoints, 
    tries 46
    Command echo hostName | nc -w 1 -u 100.96.1.11 8081
    retrieved map[]
    expected map[netserver-0:{}]

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113
------------------------------
{"msg":"FAILED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":48,"failed":3,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]"]}
Sep 14 19:52:33.621: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 22 lines ...
Sep 14 19:49:30.267: INFO: The status of Pod netserver-3 is Running (Ready = true)
STEP: Creating test pods
Sep 14 19:49:33.423: INFO: Setting MaxTries for pod polling to 46 for networking test based on endpoint count 4
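The MaxTries value is derived from the endpoint count by the e2e networking utilities; the exact formula is not shown in this log, but a hypothetical reconstruction consistent with the logged numbers (4 endpoints yielding 46 tries) is:

```shell
# Hypothetical formula -- it matches the logged values (4 endpoints -> 46
# tries) but has not been confirmed against the Kubernetes e2e source.
ep_count=4
max_tries=$((ep_count * ep_count + 30))
echo "max tries: $max_tries"   # prints "max tries: 46"
```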
Sep 14 19:49:33.424: INFO: Going to poll 100.96.1.13 on port 8080 at least 0 times, with a maximum of 46 tries before failing
Sep 14 19:49:33.568: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.1.13:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6103 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Sep 14 19:49:33.568: INFO: >>> kubeConfig: /root/.kube/config
Sep 14 19:49:35.518: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.1.13:8080/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Sep 14 19:49:35.519: INFO: Waiting for [netserver-0] endpoints (expected=[netserver-0], actual=[])
... skipping 128 lines (32 further identical HTTP probe retries via curl, each failing with exit code 1 and empty stdout/stderr) ...
Sep 14 19:51:52.056: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.1.13:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6103 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Sep 14 19:51:52.056: INFO: >>> kubeConfig: /root/.kube/config
Sep 14 19:51:54.047: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.1.13:8080/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Sep 14 19:51:54.047: INFO: Waiting for [netserver-0] endpoints (expected=[netserver-0], actual=[])
Sep 14 19:51:56.192: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.1.13:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6103 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Sep 14 19:51:56.192: INFO: >>> kubeConfig: /root/.kube/config
Sep 14 19:51:58.145: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.1.13:8080/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Sep 14 19:51:58.146: INFO: Waiting for [netserver-0] endpoints (expected=[netserver-0], actual=[])
Sep 14 19:52:00.293: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.1.13:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6103 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Sep 14 19:52:00.293: INFO: >>> kubeConfig: /root/.kube/config
Sep 14 19:52:02.287: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.1.13:8080/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Sep 14 19:52:02.287: INFO: Waiting for [netserver-0] endpoints (expected=[netserver-0], actual=[])
Sep 14 19:52:04.433: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.1.13:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6103 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Sep 14 19:52:04.433: INFO: >>> kubeConfig: /root/.kube/config
Sep 14 19:52:06.525: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.1.13:8080/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Sep 14 19:52:06.525: INFO: Waiting for [netserver-0] endpoints (expected=[netserver-0], actual=[])
Sep 14 19:52:08.676: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.1.13:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6103 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Sep 14 19:52:08.676: INFO: >>> kubeConfig: /root/.kube/config
Sep 14 19:52:10.714: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.1.13:8080/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Sep 14 19:52:10.714: INFO: Waiting for [netserver-0] endpoints (expected=[netserver-0], actual=[])
Sep 14 19:52:12.859: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.1.13:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6103 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Sep 14 19:52:12.859: INFO: >>> kubeConfig: /root/.kube/config
Sep 14 19:52:14.838: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.1.13:8080/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Sep 14 19:52:14.838: INFO: Waiting for [netserver-0] endpoints (expected=[netserver-0], actual=[])
Sep 14 19:52:16.983: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.1.13:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6103 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Sep 14 19:52:16.983: INFO: >>> kubeConfig: /root/.kube/config
Sep 14 19:52:18.929: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.1.13:8080/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Sep 14 19:52:18.930: INFO: Waiting for [netserver-0] endpoints (expected=[netserver-0], actual=[])
Sep 14 19:52:21.075: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.1.13:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6103 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Sep 14 19:52:21.075: INFO: >>> kubeConfig: /root/.kube/config
Sep 14 19:52:23.066: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.1.13:8080/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Sep 14 19:52:23.066: INFO: Waiting for [netserver-0] endpoints (expected=[netserver-0], actual=[])
Sep 14 19:52:25.212: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.1.13:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6103 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Sep 14 19:52:25.212: INFO: >>> kubeConfig: /root/.kube/config
Sep 14 19:52:27.174: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.1.13:8080/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Sep 14 19:52:27.174: INFO: Waiting for [netserver-0] endpoints (expected=[netserver-0], actual=[])
Sep 14 19:52:29.319: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.1.13:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6103 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Sep 14 19:52:29.319: INFO: >>> kubeConfig: /root/.kube/config
Sep 14 19:52:31.275: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.1.13:8080/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Sep 14 19:52:31.275: INFO: Waiting for [netserver-0] endpoints (expected=[netserver-0], actual=[])
Sep 14 19:52:33.420: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.1.13:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6103 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Sep 14 19:52:33.420: INFO: >>> kubeConfig: /root/.kube/config
Sep 14 19:52:35.379: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.1.13:8080/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Sep 14 19:52:35.380: INFO: Waiting for [netserver-0] endpoints (expected=[netserver-0], actual=[])
Sep 14 19:52:37.525: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.1.13:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6103 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Sep 14 19:52:37.525: INFO: >>> kubeConfig: /root/.kube/config
Sep 14 19:52:39.509: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.1.13:8080/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Sep 14 19:52:39.510: INFO: Waiting for [netserver-0] endpoints (expected=[netserver-0], actual=[])
Sep 14 19:52:41.655: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.1.13:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6103 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Sep 14 19:52:41.655: INFO: >>> kubeConfig: /root/.kube/config
Sep 14 19:52:43.647: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.1.13:8080/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Sep 14 19:52:43.647: INFO: Waiting for [netserver-0] endpoints (expected=[netserver-0], actual=[])
Sep 14 19:52:45.648: INFO: 
Output of kubectl describe pod pod-network-test-6103/netserver-0:

Sep 14 19:52:45.648: INFO: Running '/tmp/kubectl3134405023/kubectl --server=https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=pod-network-test-6103 describe pod netserver-0 --namespace=pod-network-test-6103'
Sep 14 19:52:46.467: INFO: stderr: ""
... skipping 237 lines ...
  ----    ------     ----   ----               -------
  Normal  Scheduled  3m39s  default-scheduler  Successfully assigned pod-network-test-6103/netserver-3 to ip-172-20-50-202.sa-east-1.compute.internal
  Normal  Pulled     3m39s  kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
  Normal  Created    3m39s  kubelet            Created container webserver
  Normal  Started    3m39s  kubelet            Started container webserver

Sep 14 19:52:48.949: FAIL: Error dialing HTTP node to pod failed to find expected endpoints, 
tries 46
Command curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.1.13:8080/hostName
retrieved map[]
expected map[netserver-0:{}]

Full Stack Trace
... skipping 179 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23
  Granular Checks: Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

    Sep 14 19:52:48.949: Error dialing HTTP node to pod failed to find expected endpoints, 
    tries 46
    Command curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.1.13:8080/hostName
    retrieved map[]
    expected map[netserver-0:{}]

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113
------------------------------
{"msg":"FAILED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":137,"failed":5,"failures":["[sig-network] Services should implement service.kubernetes.io/headless","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]"]}
Sep 14 19:52:54.839: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 4 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0914 19:50:27.986642    4799 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Sep 14 19:55:28.273: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering.
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 14 19:55:28.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1244" for this suite.


• [SLOW TEST:312.183 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":-1,"completed":43,"skipped":304,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}
Sep 14 19:55:28.569: INFO: Running AfterSuite actions on all nodes


{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":24,"skipped":252,"failed":4,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]"]}
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 14 19:51:12.287: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 14 lines ...
Sep 14 19:53:16.464: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8476.svc.cluster.local from pod dns-8476/dns-test-16e2a4e9-73bf-4196-aa79-00a16c65983d: the server is currently unable to handle the request (get pods dns-test-16e2a4e9-73bf-4196-aa79-00a16c65983d)
Sep 14 19:53:46.610: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.dns-8476.svc.cluster.local from pod dns-8476/dns-test-16e2a4e9-73bf-4196-aa79-00a16c65983d: the server is currently unable to handle the request (get pods dns-test-16e2a4e9-73bf-4196-aa79-00a16c65983d)
Sep 14 19:54:16.755: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.dns-8476.svc.cluster.local from pod dns-8476/dns-test-16e2a4e9-73bf-4196-aa79-00a16c65983d: the server is currently unable to handle the request (get pods dns-test-16e2a4e9-73bf-4196-aa79-00a16c65983d)
Sep 14 19:54:46.901: INFO: Unable to read wheezy_udp@PodARecord from pod dns-8476/dns-test-16e2a4e9-73bf-4196-aa79-00a16c65983d: the server is currently unable to handle the request (get pods dns-test-16e2a4e9-73bf-4196-aa79-00a16c65983d)
Sep 14 19:55:17.047: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-8476/dns-test-16e2a4e9-73bf-4196-aa79-00a16c65983d: the server is currently unable to handle the request (get pods dns-test-16e2a4e9-73bf-4196-aa79-00a16c65983d)
Sep 14 19:55:47.193: INFO: Unable to read 100.64.114.137_udp@PTR from pod dns-8476/dns-test-16e2a4e9-73bf-4196-aa79-00a16c65983d: the server is currently unable to handle the request (get pods dns-test-16e2a4e9-73bf-4196-aa79-00a16c65983d)
Sep 14 19:56:15.887: FAIL: Unable to read 100.64.114.137_tcp@PTR from pod dns-8476/dns-test-16e2a4e9-73bf-4196-aa79-00a16c65983d: Get "https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io/api/v1/namespaces/dns-8476/pods/dns-test-16e2a4e9-73bf-4196-aa79-00a16c65983d/proxy/results/100.64.114.137_tcp@PTR": context deadline exceeded

Full Stack Trace
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc0040b9c60, 0x298f500, 0x0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc003ad30c8, 0xc0040b9c60, 0xc003ad30c8, 0xc0040b9c60)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f
... skipping 13 lines ...
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc000d00c00, 0x70c1ec8)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
E0914 19:56:15.888426    4857 runtime.go:78] Observed a panic: ginkgowrapper.FailurePanic{Message:"Sep 14 19:56:15.887: Unable to read 100.64.114.137_tcp@PTR from pod dns-8476/dns-test-16e2a4e9-73bf-4196-aa79-00a16c65983d: Get \"https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io/api/v1/namespaces/dns-8476/pods/dns-test-16e2a4e9-73bf-4196-aa79-00a16c65983d/proxy/results/100.64.114.137_tcp@PTR\": context deadline exceeded", Filename:"/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go", Line:211, FullStackTrace:"k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc0040b9c60, 0x298f500, 0x0, 0x0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc003ad30c8, 0xc0040b9c60, 0xc003ad30c8, 0xc0040b9c60)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x12a05f200, 0x8bb2c97000, 0xc0040b9c60, 0x4a, 0x0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d\nk8s.io/kubernetes/test/e2e/network.assertFilesContain(0xc00393f080, 0x14, 0x18, 0x6eab9dd, 0x7, 0xc002c9f000, 0x7778c58, 0xc0041f5b80, 0x0, 0x0, ...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:463 +0x158\nk8s.io/kubernetes/test/e2e/network.assertFilesExist(...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:457\nk8s.io/kubernetes/test/e2e/network.validateDNSResults(0xc000324420, 0xc002c9f000, 0xc00393f080, 0x14, 
0x18)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:520 +0x365\nk8s.io/kubernetes/test/e2e/network.glob..func2.5()\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:182 +0xe65\nk8s.io/kubernetes/test/e2e.RunE2ETests(0xc000d00c00)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c\nk8s.io/kubernetes/test/e2e.TestE2E(0xc000d00c00)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b\ntesting.tRunner(0xc000d00c00, 0x70c1ec8)\n\t/usr/local/go/src/testing/testing.go:1193 +0xef\ncreated by testing.(*T).Run\n\t/usr/local/go/src/testing/testing.go:1238 +0x2b3"} (
Your test failed.
Ginkgo panics to prevent subsequent assertions from running.
Normally Ginkgo rescues this panic so you shouldn't see it.

But, if you make an assertion in a goroutine, Ginkgo can't capture the panic.
To circumvent this, you should call

... skipping 5 lines ...
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x6a4be40, 0xc0044062c0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x95
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x86
panic(0x6a4be40, 0xc0044062c0)
	/usr/local/go/src/runtime/panic.go:965 +0x1b9
k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1(0xc0035e22c0, 0x145, 0x865e7a4, 0x7d, 0xd3, 0xc001798800, 0x800)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:63 +0xa5
panic(0x61a70a0, 0x759b830)
	/usr/local/go/src/runtime/panic.go:965 +0x1b9
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.Fail(0xc0035e22c0, 0x145, 0xc0040b96a0, 0x1, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:267 +0xc8
k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail(0xc0035e22c0, 0x145, 0xc0040b9788, 0x1, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:67 +0x1b5
k8s.io/kubernetes/test/e2e/framework.Failf(0x6f4e23f, 0x24, 0xc0040b99e8, 0x4, 0x4)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/log.go:51 +0x219
k8s.io/kubernetes/test/e2e/network.assertFilesContain.func1(0xc003ad3000, 0x0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:480 +0xab1
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc0040b9c60, 0x298f500, 0x0, 0x0)
... skipping 57 lines ...
Sep 14 19:56:16.489: INFO: At 2021-09-14 19:51:14 +0000 UTC - event for dns-test-16e2a4e9-73bf-4196-aa79-00a16c65983d: {kubelet ip-172-20-48-93.sa-east-1.compute.internal} Created: Created container webserver
Sep 14 19:56:16.489: INFO: At 2021-09-14 19:51:14 +0000 UTC - event for dns-test-16e2a4e9-73bf-4196-aa79-00a16c65983d: {kubelet ip-172-20-48-93.sa-east-1.compute.internal} Created: Created container jessie-querier
Sep 14 19:56:16.489: INFO: At 2021-09-14 19:51:14 +0000 UTC - event for dns-test-16e2a4e9-73bf-4196-aa79-00a16c65983d: {kubelet ip-172-20-48-93.sa-east-1.compute.internal} Started: Started container jessie-querier
Sep 14 19:56:16.489: INFO: At 2021-09-14 19:56:15 +0000 UTC - event for dns-test-16e2a4e9-73bf-4196-aa79-00a16c65983d: {kubelet ip-172-20-48-93.sa-east-1.compute.internal} Killing: Stopping container webserver
Sep 14 19:56:16.489: INFO: At 2021-09-14 19:56:15 +0000 UTC - event for dns-test-16e2a4e9-73bf-4196-aa79-00a16c65983d: {kubelet ip-172-20-48-93.sa-east-1.compute.internal} Killing: Stopping container jessie-querier
Sep 14 19:56:16.489: INFO: At 2021-09-14 19:56:15 +0000 UTC - event for dns-test-16e2a4e9-73bf-4196-aa79-00a16c65983d: {kubelet ip-172-20-48-93.sa-east-1.compute.internal} Killing: Stopping container querier
Sep 14 19:56:16.489: INFO: At 2021-09-14 19:56:16 +0000 UTC - event for test-service-2: {endpoint-controller } FailedToUpdateEndpoint: Failed to update endpoint dns-8476/test-service-2: Operation cannot be fulfilled on endpoints "test-service-2": the object has been modified; please apply your changes to the latest version and try again
Sep 14 19:56:16.633: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Sep 14 19:56:16.633: INFO: 
Sep 14 19:56:16.779: INFO: 
Logging node info for node ip-172-20-38-237.sa-east-1.compute.internal
Sep 14 19:56:16.924: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-38-237.sa-east-1.compute.internal    1e800009-31a2-4b7e-a75a-f41012e28762 45883 0 2021-09-14 19:19:21 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:c5.large beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:sa-east-1 failure-domain.beta.kubernetes.io/zone:sa-east-1a kops.k8s.io/instancegroup:master-sa-east-1a kops.k8s.io/kops-controller-pki: kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-38-237.sa-east-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:c5.large topology.kubernetes.io/region:sa-east-1 topology.kubernetes.io/zone:sa-east-1a] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{protokube Update v1 2021-09-14 19:19:26 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/kops-controller-pki":{},"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2021-09-14 19:19:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.0.0/24\"":{}},"f:taints":{}}}} {kops-controller Update v1 2021-09-14 19:19:42 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/instancegroup":{}}}}} {kubelet Update v1 2021-09-14 19:20:11 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:100.96.0.0/24,DoNotUseExternalID:,ProviderID:aws:///sa-east-1a/i-02430e901bb78d60b,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[100.96.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-aws-ebs: {{25 0} {<nil>} 25 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{47455764480 0} {<nil>}  BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3872706560 0} {<nil>} 3781940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-aws-ebs: {{25 0} {<nil>} 25 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42710187962 0} {<nil>} 42710187962 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3767848960 0} {<nil>} 3679540Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-09-14 19:55:18 
+0000 UTC,LastTransitionTime:2021-09-14 19:19:14 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-09-14 19:55:18 +0000 UTC,LastTransitionTime:2021-09-14 19:19:14 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-09-14 19:55:18 +0000 UTC,LastTransitionTime:2021-09-14 19:19:14 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-09-14 19:55:18 +0000 UTC,LastTransitionTime:2021-09-14 19:19:38 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.38.237,},NodeAddress{Type:ExternalIP,Address:52.67.190.173,},NodeAddress{Type:Hostname,Address:ip-172-20-38-237.sa-east-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-38-237.sa-east-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-52-67-190-173.sa-east-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2479a1c3270f264028a6995d8b2bc1,SystemUUID:ec2479a1-c327-0f26-4028-a6995d8b2bc1,BootID:273f82a2-771e-4078-a92c-90735aa7a38d,KernelVersion:5.10.61-flatcar,OSImage:Flatcar Container Linux by Kinvolk 2905.2.3 (Oklo),ContainerRuntimeVersion:containerd://1.5.4,KubeletVersion:v1.21.4,KubeProxyVersion:v1.21.4,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcdadm/etcd-manager@sha256:ebb73d3d4a99da609f9e01c556cd9f9aa7a0aecba8f5bc5588d7c45eb38e3a7e 
k8s.gcr.io/etcdadm/etcd-manager:3.0.20210430],SizeBytes:171082409,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver-amd64:v1.21.4],SizeBytes:126880221,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager-amd64:v1.21.4],SizeBytes:121092419,},ContainerImage{Names:[k8s.gcr.io/kops/dns-controller:1.21.1],SizeBytes:113860118,},ContainerImage{Names:[k8s.gcr.io/kops/kops-controller:1.21.1],SizeBytes:112068119,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.21.4],SizeBytes:105127625,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler-amd64:v1.21.4],SizeBytes:51890488,},ContainerImage{Names:[docker.io/kopeio/networking-agent@sha256:2d16bdbc3257c42cdc59b05b8fad86653033f19cfafa709f263e93c8f7002932 docker.io/kopeio/networking-agent:1.0.20181028],SizeBytes:25781346,},ContainerImage{Names:[k8s.gcr.io/kops/kube-apiserver-healthcheck:1.21.1],SizeBytes:25622039,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause:3.5],SizeBytes:301416,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Sep 14 19:56:16.924: INFO: 
... skipping 113 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Sep 14 19:56:15.887: Unable to read 100.64.114.137_tcp@PTR from pod dns-8476/dns-test-16e2a4e9-73bf-4196-aa79-00a16c65983d: Get "https://api.e2e-c4ce364831-62691.test-cncf-aws.k8s.io/api/v1/namespaces/dns-8476/pods/dns-test-16e2a4e9-73bf-4196-aa79-00a16c65983d/proxy/results/100.64.114.137_tcp@PTR": context deadline exceeded

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211
------------------------------
{"msg":"FAILED [sig-network] DNS should provide DNS for services  [Conformance]","total":-1,"completed":24,"skipped":252,"failed":5,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-network] DNS should provide DNS for services  [Conformance]"]}
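Each finished spec in this log emits a one-line JSON summary record like the one above (`msg`, `total`, `completed`, `skipped`, `failed`, `failures`). When triaging a long run, those records can be tallied offline; a minimal sketch in Python — the field names are taken from the records in this log, the helper itself is illustrative and not part of the test harness:

```python
import json

def tally(lines):
    """Aggregate ginkgo per-spec JSON summary records like the ones in this log."""
    failed_specs = set()
    completed = 0
    for line in lines:
        line = line.strip()
        # Summary records are single-line JSON objects carrying a "msg" field.
        if not (line.startswith("{") and '"msg"' in line):
            continue
        rec = json.loads(line)
        completed = max(completed, rec.get("completed", 0))
        for name in rec.get("failures", []):
            failed_specs.add(name)
    return completed, sorted(failed_specs)

# Example with a record shaped like the one above:
sample = ('{"msg":"FAILED [sig-network] DNS should provide DNS for services  '
          '[Conformance]","total":-1,"completed":24,"skipped":252,"failed":5,'
          '"failures":["[sig-network] DNS should provide DNS for services  [Conformance]"]}')
completed, failures = tally([sample])
```

Running this over the whole build-log would collapse the scattered `FAILED` records into a single de-duplicated failure list.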
Sep 14 19:56:22.307: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-storage] PersistentVolumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 22 lines ...
Sep 14 19:50:54.969: INFO: PersistentVolumeClaim pvc-59cwv found and phase=Bound (6.574864636s)
Sep 14 19:50:54.969: INFO: Waiting up to 3m0s for PersistentVolume nfs-ms8tl to have phase Bound
Sep 14 19:50:55.112: INFO: PersistentVolume nfs-ms8tl found and phase=Bound (142.758204ms)
STEP: Checking pod has write access to PersistentVolume
Sep 14 19:50:55.398: INFO: Creating nfs test pod
Sep 14 19:50:55.542: INFO: Pod should terminate with exitcode 0 (success)
Sep 14 19:50:55.542: INFO: Waiting up to 5m0s for pod "pvc-tester-n24zf" in namespace "pv-7000" to be "Succeeded or Failed"
Sep 14 19:50:55.685: INFO: Pod "pvc-tester-n24zf": Phase="Pending", Reason="", readiness=false. Elapsed: 142.8461ms
Sep 14 19:50:57.829: INFO: Pod "pvc-tester-n24zf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.286402785s
Sep 14 19:50:59.974: INFO: Pod "pvc-tester-n24zf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.432096836s
Sep 14 19:51:02.118: INFO: Pod "pvc-tester-n24zf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.576021026s
Sep 14 19:51:04.263: INFO: Pod "pvc-tester-n24zf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.720967233s
Sep 14 19:51:06.408: INFO: Pod "pvc-tester-n24zf": Phase="Pending", Reason="", readiness=false. Elapsed: 10.865870048s
... skipping 130 lines ...
Sep 14 19:55:47.346: INFO: Pod "pvc-tester-n24zf": Phase="Pending", Reason="", readiness=false. Elapsed: 4m51.803975379s
Sep 14 19:55:49.490: INFO: Pod "pvc-tester-n24zf": Phase="Pending", Reason="", readiness=false. Elapsed: 4m53.948362097s
Sep 14 19:55:51.635: INFO: Pod "pvc-tester-n24zf": Phase="Pending", Reason="", readiness=false. Elapsed: 4m56.09251051s
Sep 14 19:55:53.779: INFO: Pod "pvc-tester-n24zf": Phase="Pending", Reason="", readiness=false. Elapsed: 4m58.236655889s
Sep 14 19:55:55.780: INFO: Deleting pod "pvc-tester-n24zf" in namespace "pv-7000"
Sep 14 19:55:55.928: INFO: Wait up to 5m0s for pod "pvc-tester-n24zf" to be fully deleted
Sep 14 19:56:08.215: FAIL: Unexpected error:
    <*errors.errorString | 0xc003614fd0>: {
        s: "pod \"pvc-tester-n24zf\" did not exit with Success: pod \"pvc-tester-n24zf\" failed to reach Success: Gave up after waiting 5m0s for pod \"pvc-tester-n24zf\" to be \"Succeeded or Failed\"",
    }
    pod "pvc-tester-n24zf" did not exit with Success: pod "pvc-tester-n24zf" failed to reach Success: Gave up after waiting 5m0s for pod "pvc-tester-n24zf" to be "Succeeded or Failed"
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/storage.completeTest(0xc003abef20, 0x7778c58, 0xc003d44160, 0xc00395c6a9, 0x7, 0xc000f22280, 0xc003323880)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:52 +0x19c
k8s.io/kubernetes/test/e2e/storage.glob..func22.2.3.4()
... skipping 17 lines ...
Sep 14 19:56:08.650: INFO: Wait up to 5m0s for pod "nfs-server" to be fully deleted
[AfterEach] [sig-storage] PersistentVolumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "pv-7000".
STEP: Found 10 events.
Sep 14 19:56:19.083: INFO: At 2021-09-14 19:50:42 +0000 UTC - event for nfs-server: {default-scheduler } Scheduled: Successfully assigned pv-7000/nfs-server to ip-172-20-50-202.sa-east-1.compute.internal
Sep 14 19:56:19.083: INFO: At 2021-09-14 19:50:44 +0000 UTC - event for nfs-server: {kubelet ip-172-20-50-202.sa-east-1.compute.internal} FailedMount: MountVolume.SetUp failed for volume "kube-api-access-qqx25" : failed to sync configmap cache: timed out waiting for the condition
Sep 14 19:56:19.083: INFO: At 2021-09-14 19:50:45 +0000 UTC - event for nfs-server: {kubelet ip-172-20-50-202.sa-east-1.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/volume/nfs:1.2" already present on machine
Sep 14 19:56:19.083: INFO: At 2021-09-14 19:50:45 +0000 UTC - event for nfs-server: {kubelet ip-172-20-50-202.sa-east-1.compute.internal} Created: Created container nfs-server
Sep 14 19:56:19.083: INFO: At 2021-09-14 19:50:45 +0000 UTC - event for nfs-server: {kubelet ip-172-20-50-202.sa-east-1.compute.internal} Started: Started container nfs-server
Sep 14 19:56:19.083: INFO: At 2021-09-14 19:50:48 +0000 UTC - event for pvc-59cwv: {persistentvolume-controller } FailedBinding: no persistent volumes available for this claim and no storage class is set
Sep 14 19:56:19.083: INFO: At 2021-09-14 19:50:55 +0000 UTC - event for pvc-tester-n24zf: {default-scheduler } Scheduled: Successfully assigned pv-7000/pvc-tester-n24zf to ip-172-20-48-93.sa-east-1.compute.internal
Sep 14 19:56:19.083: INFO: At 2021-09-14 19:52:58 +0000 UTC - event for pvc-tester-n24zf: {kubelet ip-172-20-48-93.sa-east-1.compute.internal} FailedMount: Unable to attach or mount volumes: unmounted volumes=[volume1], unattached volumes=[volume1 kube-api-access-nlhnq]: timed out waiting for the condition
Sep 14 19:56:19.083: INFO: At 2021-09-14 19:53:56 +0000 UTC - event for pvc-tester-n24zf: {kubelet ip-172-20-48-93.sa-east-1.compute.internal} FailedMount: MountVolume.SetUp failed for volume "nfs-ms8tl" : mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t nfs 100.96.2.143:/exports /var/lib/kubelet/pods/d64930eb-3efa-4ebc-8f05-a427e9dd3e08/volumes/kubernetes.io~nfs/nfs-ms8tl
Output: mount.nfs: Connection timed out

Sep 14 19:56:19.083: INFO: At 2021-09-14 19:56:08 +0000 UTC - event for nfs-server: {kubelet ip-172-20-50-202.sa-east-1.compute.internal} Killing: Stopping container nfs-server
Sep 14 19:56:19.226: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
... skipping 118 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:122
    with Single PV - PVC pairs
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:155
      create a PVC and a pre-bound PV: test write access [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:187

      Sep 14 19:56:08.216: Unexpected error:
          <*errors.errorString | 0xc003614fd0>: {
              s: "pod \"pvc-tester-n24zf\" did not exit with Success: pod \"pvc-tester-n24zf\" failed to reach Success: Gave up after waiting 5m0s for pod \"pvc-tester-n24zf\" to be \"Succeeded or Failed\"",
          }
          pod "pvc-tester-n24zf" did not exit with Success: pod "pvc-tester-n24zf" failed to reach Success: Gave up after waiting 5m0s for pod "pvc-tester-n24zf" to be "Succeeded or Failed"
      occurred

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:52
------------------------------
{"msg":"FAILED [sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PVC and a pre-bound PV: test write access","total":-1,"completed":49,"skipped":446,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-network] Conntrack should drop INVALID conntrack entries","[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PVC and a pre-bound PV: test write access"]}
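The spec above spent its entire 5m0s budget polling the pod phase roughly every 2s ("Waiting up to 5m0s for pod ... to be \"Succeeded or Failed\"") before giving up. A simplified sketch of that polling pattern — function and parameter names here are illustrative, not the framework's actual API:

```python
import time

def wait_for_pod_phase(get_phase, timeout=300.0, interval=2.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until it reports 'Succeeded' or 'Failed', or time out."""
    deadline = clock() + timeout
    while clock() < deadline:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        sleep(interval)
    raise TimeoutError(f"gave up after waiting {timeout:.0f}s for pod phase")

# With an injected fake phase sequence, no real cluster is needed:
phases = iter(["Pending", "Pending", "Succeeded"])
result = wait_for_pod_phase(lambda: next(phases), timeout=10, sleep=lambda _: None)
```

In the failure above the pod never left `Pending` (the NFS volume mount kept timing out), so the loop exhausted its deadline exactly as the `Gave up after waiting 5m0s` message records.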
Sep 14 19:56:24.580: INFO: Running AfterSuite actions on all nodes


{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":-1,"completed":33,"skipped":191,"failed":2,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]"]}
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 14 19:36:30.083: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 12 lines ...
Sep 14 19:38:03.808: INFO: Unable to read wheezy_udp@PodARecord from pod dns-4618/dns-test-dffcaee5-0147-4e49-96f2-95073e69b38c: the server is currently unable to handle the request (get pods dns-test-dffcaee5-0147-4e49-96f2-95073e69b38c)
Sep 14 19:38:33.951: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-4618/dns-test-dffcaee5-0147-4e49-96f2-95073e69b38c: the server is currently unable to handle the request (get pods dns-test-dffcaee5-0147-4e49-96f2-95073e69b38c)
Sep 14 19:39:04.095: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod dns-4618/dns-test-dffcaee5-0147-4e49-96f2-95073e69b38c: the server is currently unable to handle the request (get pods dns-test-dffcaee5-0147-4e49-96f2-95073e69b38c)
Sep 14 19:39:34.239: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod dns-4618/dns-test-dffcaee5-0147-4e49-96f2-95073e69b38c: the server is currently unable to handle the request (get pods dns-test-dffcaee5-0147-4e49-96f2-95073e69b38c)
Sep 14 19:40:04.383: INFO: Unable to read jessie_udp@PodARecord from pod dns-4618/dns-test-dffcaee5-0147-4e49-96f2-95073e69b38c: the server is currently unable to handle the request (get pods dns-test-dffcaee5-0147-4e49-96f2-95073e69b38c)
Sep 14 19:40:34.526: INFO: Unable to read jessie_tcp@PodARecord from pod dns-4618/dns-test-dffcaee5-0147-4e49-96f2-95073e69b38c: the server is currently unable to handle the request (get pods dns-test-dffcaee5-0147-4e49-96f2-95073e69b38c)
Sep 14 19:40:34.526: INFO: Lookups using dns-4618/dns-test-dffcaee5-0147-4e49-96f2-95073e69b38c failed for: [wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord]

Sep 14 19:41:09.674: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod dns-4618/dns-test-dffcaee5-0147-4e49-96f2-95073e69b38c: the server is currently unable to handle the request (get pods dns-test-dffcaee5-0147-4e49-96f2-95073e69b38c)
Sep 14 19:41:39.818: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-4618/dns-test-dffcaee5-0147-4e49-96f2-95073e69b38c: the server is currently unable to handle the request (get pods dns-test-dffcaee5-0147-4e49-96f2-95073e69b38c)
Sep 14 19:42:09.961: INFO: Unable to read wheezy_udp@PodARecord from pod dns-4618/dns-test-dffcaee5-0147-4e49-96f2-95073e69b38c: the server is currently unable to handle the request (get pods dns-test-dffcaee5-0147-4e49-96f2-95073e69b38c)
Sep 14 19:42:40.106: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-4618/dns-test-dffcaee5-0147-4e49-96f2-95073e69b38c: the server is currently unable to handle the request (get pods dns-test-dffcaee5-0147-4e49-96f2-95073e69b38c)
Sep 14 19:43:10.249: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod dns-4618/dns-test-dffcaee5-0147-4e49-96f2-95073e69b38c: the server is currently unable to handle the request (get pods dns-test-dffcaee5-0147-4e49-96f2-95073e69b38c)
Sep 14 19:43:40.392: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod dns-4618/dns-test-dffcaee5-0147-4e49-96f2-95073e69b38c: the server is currently unable to handle the request (get pods dns-test-dffcaee5-0147-4e49-96f2-95073e69b38c)
Sep 14 19:44:10.537: INFO: Unable to read jessie_udp@PodARecord from pod dns-4618/dns-test-dffcaee5-0147-4e49-96f2-95073e69b38c: the server is currently unable to handle the request (get pods dns-test-dffcaee5-0147-4e49-96f2-95073e69b38c)
Sep 14 19:44:40.680: INFO: Unable to read jessie_tcp@PodARecord from pod dns-4618/dns-test-dffcaee5-0147-4e49-96f2-95073e69b38c: the server is currently unable to handle the request (get pods dns-test-dffcaee5-0147-4e49-96f2-95073e69b38c)
Sep 14 19:44:40.680: INFO: Lookups using dns-4618/dns-test-dffcaee5-0147-4e49-96f2-95073e69b38c failed for: [wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord]

Sep 14 19:45:14.671: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod dns-4618/dns-test-dffcaee5-0147-4e49-96f2-95073e69b38c: the server is currently unable to handle the request (get pods dns-test-dffcaee5-0147-4e49-96f2-95073e69b38c)
Sep 14 19:45:44.815: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-4618/dns-test-dffcaee5-0147-4e49-96f2-95073e69b38c: the server is currently unable to handle the request (get pods dns-test-dffcaee5-0147-4e49-96f2-95073e69b38c)
Sep 14 19:46:14.959: INFO: Unable to read wheezy_udp@PodARecord from pod dns-4618/dns-test-dffcaee5-0147-4e49-96f2-95073e69b38c: the server is currently unable to handle the request (get pods dns-test-dffcaee5-0147-4e49-96f2-95073e69b38c)
Sep 14 19:46:45.103: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-4618/dns-test-dffcaee5-0147-4e49-96f2-95073e69b38c: the server is currently unable to handle the request (get pods dns-test-dffcaee5-0147-4e49-96f2-95073e69b38c)
Sep 14 19:47:15.248: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod dns-4618/dns-test-dffcaee5-0147-4e49-96f2-95073e69b38c: the server is currently unable to handle the request (get pods dns-test-dffcaee5-0147-4e49-96f2-95073e69b38c)
Sep 14 19:47:45.391: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod dns-4618/dns-test-dffcaee5-0147-4e49-96f2-95073e69b38c: the server is currently unable to handle the request (get pods dns-test-dffcaee5-0147-4e49-96f2-95073e69b38c)
Sep 14 19:48:15.536: INFO: Unable to read jessie_udp@PodARecord from pod dns-4618/dns-test-dffcaee5-0147-4e49-96f2-95073e69b38c: the server is currently unable to handle the request (get pods dns-test-dffcaee5-0147-4e49-96f2-95073e69b38c)
Sep 14 19:48:45.680: INFO: Unable to read jessie_tcp@PodARecord from pod dns-4618/dns-test-dffcaee5-0147-4e49-96f2-95073e69b38c: the server is currently unable to handle the request (get pods dns-test-dffcaee5-0147-4e49-96f2-95073e69b38c)
Sep 14 19:48:45.681: INFO: Lookups using dns-4618/dns-test-dffcaee5-0147-4e49-96f2-95073e69b38c failed for: [wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord]

Sep 14 19:49:19.672: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod dns-4618/dns-test-dffcaee5-0147-4e49-96f2-95073e69b38c: the server is currently unable to handle the request (get pods dns-test-dffcaee5-0147-4e49-96f2-95073e69b38c)
Sep 14 19:49:49.816: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-4618/dns-test-dffcaee5-0147-4e49-96f2-95073e69b38c: the server is currently unable to handle the request (get pods dns-test-dffcaee5-0147-4e49-96f2-95073e69b38c)
Sep 14 19:50:19.960: INFO: Unable to read wheezy_udp@PodARecord from pod dns-4618/dns-test-dffcaee5-0147-4e49-96f2-95073e69b38c: the server is currently unable to handle the request (get pods dns-test-dffcaee5-0147-4e49-96f2-95073e69b38c)
Sep 14 19:50:50.103: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-4618/dns-test-dffcaee5-0147-4e49-96f2-95073e69b38c: the server is currently unable to handle the request (get pods dns-test-dffcaee5-0147-4e49-96f2-95073e69b38c)
Sep 14 19:51:20.248: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod dns-4618/dns-test-dffcaee5-0147-4e49-96f2-95073e69b38c: the server is currently unable to handle the request (get pods dns-test-dffcaee5-0147-4e49-96f2-95073e69b38c)
Sep 14 19:51:50.393: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod dns-4618/dns-test-dffcaee5-0147-4e49-96f2-95073e69b38c: the server is currently unable to handle the request (get pods dns-test-dffcaee5-0147-4e49-96f2-95073e69b38c)
Sep 14 19:52:20.544: INFO: Unable to read jessie_udp@PodARecord from pod dns-4618/dns-test-dffcaee5-0147-4e49-96f2-95073e69b38c: the server is currently unable to handle the request (get pods dns-test-dffcaee5-0147-4e49-96f2-95073e69b38c)
Sep 14 19:52:50.689: INFO: Unable to read jessie_tcp@PodARecord from pod dns-4618/dns-test-dffcaee5-0147-4e49-96f2-95073e69b38c: the server is currently unable to handle the request (get pods dns-test-dffcaee5-0147-4e49-96f2-95073e69b38c)
Sep 14 19:52:50.689: INFO: Lookups using dns-4618/dns-test-dffcaee5-0147-4e49-96f2-95073e69b38c failed for: [wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord]

Sep 14 19:53:20.833: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod dns-4618/dns-test-dffcaee5-0147-4e49-96f2-95073e69b38c: the server is currently unable to handle the request (get pods dns-test-dffcaee5-0147-4e49-96f2-95073e69b38c)
Sep 14 19:53:50.979: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-4618/dns-test-dffcaee5-0147-4e49-96f2-95073e69b38c: the server is currently unable to handle the request (get pods dns-test-dffcaee5-0147-4e49-96f2-95073e69b38c)
Sep 14 19:54:21.124: INFO: Unable to read wheezy_udp@PodARecord from pod dns-4618/dns-test-dffcaee5-0147-4e49-96f2-95073e69b38c: the server is currently unable to handle the request (get pods dns-test-dffcaee5-0147-4e49-96f2-95073e69b38c)
Sep 14 19:54:51.268: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-4618/dns-test-dffcaee5-0147-4e49-96f2-95073e69b38c: the server is currently unable to handle the request (get pods dns-test-dffcaee5-0147-4e49-96f2-95073e69b38c)
Sep 14 19:55:21.412: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod dns-4618/dns-test-dffcaee5-0147-4e49-96f2-95073e69b38c: the server is currently unable to handle the request (get pods dns-test-dffcaee5-0147-4e49-96f2-95073e69b38c)
Sep 14 19:55:51.557: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod dns-4618/dns-test-dffcaee5-0147-4e49-96f2-95073e69b38c: the server is currently unable to handle the request (get pods dns-test-dffcaee5-0147-4e49-96f2-95073e69b38c)
Sep 14 19:56:21.702: INFO: Unable to read jessie_udp@PodARecord from pod dns-4618/dns-test-dffcaee5-0147-4e49-96f2-95073e69b38c: the server is currently unable to handle the request (get pods dns-test-dffcaee5-0147-4e49-96f2-95073e69b38c)
Sep 14 19:56:51.846: INFO: Unable to read jessie_tcp@PodARecord from pod dns-4618/dns-test-dffcaee5-0147-4e49-96f2-95073e69b38c: the server is currently unable to handle the request (get pods dns-test-dffcaee5-0147-4e49-96f2-95073e69b38c)
Sep 14 19:56:51.846: INFO: Lookups using dns-4618/dns-test-dffcaee5-0147-4e49-96f2-95073e69b38c failed for: [wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord]

Sep 14 19:56:51.847: FAIL: Unexpected error:
    <*errors.errorString | 0xc000244250>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 146 lines ...
• Failure [1227.531 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide DNS for the cluster  [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Sep 14 19:56:51.847: Unexpected error:
      <*errors.errorString | 0xc000244250>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:463
------------------------------
{"msg":"FAILED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":-1,"completed":33,"skipped":191,"failed":3,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]"]}
Sep 14 19:56:57.623: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 277 lines ...
  ----    ------     ----  ----               -------
  Normal  Scheduled  39s   default-scheduler  Successfully assigned pod-network-test-237/netserver-3 to ip-172-20-50-202.sa-east-1.compute.internal
  Normal  Pulled     38s   kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
  Normal  Created    38s   kubelet            Created container webserver
  Normal  Started    38s   kubelet            Started container webserver

Sep 14 19:37:43.708: INFO: encountered error during dial (did not find expected responses... 
Tries 1
Command curl -g -q -s 'http://100.96.1.175:9080/dial?request=hostname&protocol=udp&host=100.96.3.185&port=8081&tries=1'
retrieved map[]
expected map[netserver-1:{}])
Sep 14 19:37:43.708: INFO: ...failed...will try again in next pass
Sep 14 19:37:43.708: INFO: Breadth first check of 100.96.4.198 on host 172.20.48.93...
Sep 14 19:37:43.852: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://100.96.1.175:9080/dial?request=hostname&protocol=udp&host=100.96.4.198&port=8081&tries=1'] Namespace:pod-network-test-237 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Sep 14 19:37:43.852: INFO: >>> kubeConfig: /root/.kube/config
Sep 14 19:37:49.803: INFO: Waiting for responses: map[netserver-2:{}]
Sep 14 19:37:51.804: INFO: 
Output of kubectl describe pod pod-network-test-237/netserver-0:
... skipping 240 lines ...
  ----    ------     ----  ----               -------
  Normal  Scheduled  51s   default-scheduler  Successfully assigned pod-network-test-237/netserver-3 to ip-172-20-50-202.sa-east-1.compute.internal
  Normal  Pulled     50s   kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
  Normal  Created    50s   kubelet            Created container webserver
  Normal  Started    50s   kubelet            Started container webserver

Sep 14 19:37:55.050: INFO: encountered error during dial (did not find expected responses... 
Tries 1
Command curl -g -q -s 'http://100.96.1.175:9080/dial?request=hostname&protocol=udp&host=100.96.4.198&port=8081&tries=1'
retrieved map[]
expected map[netserver-2:{}])
Sep 14 19:37:55.050: INFO: ...failed...will try again in next pass
Sep 14 19:37:55.050: INFO: Breadth first check of 100.96.2.227 on host 172.20.50.202...
Sep 14 19:37:55.194: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://100.96.1.175:9080/dial?request=hostname&protocol=udp&host=100.96.2.227&port=8081&tries=1'] Namespace:pod-network-test-237 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Sep 14 19:37:55.194: INFO: >>> kubeConfig: /root/.kube/config
Sep 14 19:38:01.133: INFO: Waiting for responses: map[netserver-3:{}]
Sep 14 19:38:03.134: INFO: 
Output of kubectl describe pod pod-network-test-237/netserver-0:
... skipping 240 lines ...
  ----    ------     ----  ----               -------
  Normal  Scheduled  62s   default-scheduler  Successfully assigned pod-network-test-237/netserver-3 to ip-172-20-50-202.sa-east-1.compute.internal
  Normal  Pulled     61s   kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
  Normal  Created    61s   kubelet            Created container webserver
  Normal  Started    61s   kubelet            Started container webserver

Sep 14 19:38:06.410: INFO: encountered error during dial (did not find expected responses... 
Tries 1
Command curl -g -q -s 'http://100.96.1.175:9080/dial?request=hostname&protocol=udp&host=100.96.2.227&port=8081&tries=1'
retrieved map[]
expected map[netserver-3:{}])
Sep 14 19:38:06.410: INFO: ...failed...will try again in next pass
Sep 14 19:38:06.410: INFO: Going to retry 3 out of 4 pods....
Sep 14 19:38:06.410: INFO: Doublechecking 1 pods in host 172.20.48.74 which werent seen the first time.
Sep 14 19:38:06.410: INFO: Now attempting to probe pod [[[ 100.96.3.185 ]]]
Sep 14 19:38:06.555: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://100.96.1.175:9080/dial?request=hostname&protocol=udp&host=100.96.3.185&port=8081&tries=1'] Namespace:pod-network-test-237 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Sep 14 19:38:06.555: INFO: >>> kubeConfig: /root/.kube/config
Sep 14 19:38:12.491: INFO: Waiting for responses: map[netserver-1:{}]
... skipping 377 lines ...
  ----    ------     ----   ----               -------
  Normal  Scheduled  7m19s  default-scheduler  Successfully assigned pod-network-test-237/netserver-3 to ip-172-20-50-202.sa-east-1.compute.internal
  Normal  Pulled     7m18s  kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
  Normal  Created    7m18s  kubelet            Created container webserver
  Normal  Started    7m18s  kubelet            Started container webserver

Sep 14 19:44:23.316: INFO: encountered error during dial (did not find expected responses... 
Tries 46
Command curl -g -q -s 'http://100.96.1.175:9080/dial?request=hostname&protocol=udp&host=100.96.3.185&port=8081&tries=1'
retrieved map[]
expected map[netserver-1:{}])
Sep 14 19:44:23.316: INFO: ... Done probing pod [[[ 100.96.3.185 ]]]
Sep 14 19:44:23.316: INFO: succeeded at polling 3 out of 4 connections
... skipping 382 lines ...
  ----    ------     ----  ----               -------
  Normal  Scheduled  13m   default-scheduler  Successfully assigned pod-network-test-237/netserver-3 to ip-172-20-50-202.sa-east-1.compute.internal
  Normal  Pulled     13m   kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
  Normal  Created    13m   kubelet            Created container webserver
  Normal  Started    13m   kubelet            Started container webserver

Sep 14 19:50:39.737: INFO: encountered error during dial (did not find expected responses... 
Tries 46
Command curl -g -q -s 'http://100.96.1.175:9080/dial?request=hostname&protocol=udp&host=100.96.4.198&port=8081&tries=1'
retrieved map[]
expected map[netserver-2:{}])
Sep 14 19:50:39.737: INFO: ... Done probing pod [[[ 100.96.4.198 ]]]
Sep 14 19:50:39.737: INFO: succeeded at polling 2 out of 4 connections
... skipping 382 lines ...
  ----    ------     ----  ----               -------
  Normal  Scheduled  19m   default-scheduler  Successfully assigned pod-network-test-237/netserver-3 to ip-172-20-50-202.sa-east-1.compute.internal
  Normal  Pulled     19m   kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
  Normal  Created    19m   kubelet            Created container webserver
  Normal  Started    19m   kubelet            Started container webserver

Sep 14 19:56:56.957: INFO: encountered error during dial (did not find expected responses... 
Tries 46
Command curl -g -q -s 'http://100.96.1.175:9080/dial?request=hostname&protocol=udp&host=100.96.2.227&port=8081&tries=1'
retrieved map[]
expected map[netserver-3:{}])
Sep 14 19:56:56.957: INFO: ... Done probing pod [[[ 100.96.2.227 ]]]
Sep 14 19:56:56.957: INFO: succeeded at polling 1 out of 4 connections
Sep 14 19:56:56.957: INFO: pod polling failure summary:
Sep 14 19:56:56.957: INFO: Collected error: did not find expected responses... 
Tries 46
Command curl -g -q -s 'http://100.96.1.175:9080/dial?request=hostname&protocol=udp&host=100.96.3.185&port=8081&tries=1'
retrieved map[]
expected map[netserver-1:{}]
Sep 14 19:56:56.957: INFO: Collected error: did not find expected responses... 
Tries 46
Command curl -g -q -s 'http://100.96.1.175:9080/dial?request=hostname&protocol=udp&host=100.96.4.198&port=8081&tries=1'
retrieved map[]
expected map[netserver-2:{}]
Sep 14 19:56:56.957: INFO: Collected error: did not find expected responses... 
Tries 46
Command curl -g -q -s 'http://100.96.1.175:9080/dial?request=hostname&protocol=udp&host=100.96.2.227&port=8081&tries=1'
retrieved map[]
expected map[netserver-3:{}]
Sep 14 19:56:56.957: FAIL: failed,  3 out of 4 connections failed

Full Stack Trace
k8s.io/kubernetes/test/e2e/common/network.glob..func1.1.3()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:93 +0x69
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc002a14480)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
... skipping 148 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23
  Granular Checks: Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30
    should function for intra-pod communication: udp [NodeConformance] [Conformance] [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

    Sep 14 19:56:56.957: failed,  3 out of 4 connections failed

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:93
------------------------------
{"msg":"FAILED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":95,"failed":1,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]"]}
Sep 14 19:57:03.037: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 7 lines ...
STEP: creating RC slow-terminating-unready-pod with selectors map[name:slow-terminating-unready-pod]
STEP: creating Service tolerate-unready with selectors map[name:slow-terminating-unready-pod testid:tolerate-unready-ae1d8ad6-6f55-44a8-bd28-d1e6f7d90b21]
STEP: Verifying pods for RC slow-terminating-unready-pod
Sep 14 19:40:56.100: INFO: Pod name slow-terminating-unready-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
STEP: trying to dial each unique pod
Sep 14 19:41:28.820: INFO: Controller slow-terminating-unready-pod: Failed to GET from replica 1 [slow-terminating-unready-pod-s47hs]: the server is currently unable to handle the request (get pods slow-terminating-unready-pod-s47hs)
pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63767245255, loc:(*time.Location)(0x9de2b80)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63767245255, loc:(*time.Location)(0x9de2b80)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [slow-terminating-unready-pod]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63767245255, loc:(*time.Location)(0x9de2b80)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [slow-terminating-unready-pod]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63767245255, loc:(*time.Location)(0x9de2b80)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.20.48.93", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(0xc002522d98), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"slow-terminating-unready-pod", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc003116720), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/agnhost:2.32", ImageID:"", 
ContainerID:"", Started:(*bool)(0xc002408f6c)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
Sep 14 19:42:01.253: INFO: Controller slow-terminating-unready-pod: Failed to GET from replica 1 [slow-terminating-unready-pod-s47hs]: the server is currently unable to handle the request (get pods slow-terminating-unready-pod-s47hs)
pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63767245255, loc:(*time.Location)(0x9de2b80)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63767245255, loc:(*time.Location)(0x9de2b80)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [slow-terminating-unready-pod]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63767245255, loc:(*time.Location)(0x9de2b80)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [slow-terminating-unready-pod]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63767245255, loc:(*time.Location)(0x9de2b80)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.20.48.93", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(0xc002522d98), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"slow-terminating-unready-pod", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc003116720), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/agnhost:2.32", ImageID:"", 
ContainerID:"", Started:(*bool)(0xc002408f6c)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
Sep 14 19:42:33.252: INFO: Controller slow-terminating-unready-pod: Failed to GET from replica 1 [slow-terminating-unready-pod-s47hs]: the server is currently unable to handle the request (get pods slow-terminating-unready-pod-s47hs)
pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63767245255, loc:(*time.Location)(0x9de2b80)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63767245255, loc:(*time.Location)(0x9de2b80)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [slow-terminating-unready-pod]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63767245255, loc:(*time.Location)(0x9de2b80)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [slow-terminating-unready-pod]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63767245255, loc:(*time.Location)(0x9de2b80)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.20.48.93", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(0xc002522d98), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"slow-terminating-unready-pod", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc003116720), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/agnhost:2.32", ImageID:"", 
ContainerID:"", Started:(*bool)(0xc002408f6c)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
Sep 14 19:43:05.253: INFO: Controller slow-terminating-unready-pod: Failed to GET from replica 1 [slow-terminating-unready-pod-s47hs]: the server is currently unable to handle the request (get pods slow-terminating-unready-pod-s47hs)
pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63767245255, loc:(*time.Location)(0x9de2b80)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63767245255, loc:(*time.Location)(0x9de2b80)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [slow-terminating-unready-pod]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63767245255, loc:(*time.Location)(0x9de2b80)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [slow-terminating-unready-pod]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63767245255, loc:(*time.Location)(0x9de2b80)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.20.48.93", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(0xc002522d98), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"slow-terminating-unready-pod", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc003116720), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/agnhost:2.32", ImageID:"", 
ContainerID:"", Started:(*bool)(0xc002408f6c)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
Sep 14 19:43:37.261: INFO: Controller slow-terminating-unready-pod: Failed to GET from replica 1 [slow-terminating-unready-pod-s47hs]: the server is currently unable to handle the request (get pods slow-terminating-unready-pod-s47hs)
pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63767245255, loc:(*time.Location)(0x9de2b80)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63767245255, loc:(*time.Location)(0x9de2b80)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [slow-terminating-unready-pod]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63767245255, loc:(*time.Location)(0x9de2b80)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [slow-terminating-unready-pod]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63767245255, loc:(*time.Location)(0x9de2b80)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.20.48.93", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(0xc002522d98), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"slow-terminating-unready-pod", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc003116720), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/agnhost:2.32", ImageID:"", 
ContainerID:"", Started:(*bool)(0xc002408f6c)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
Sep 14 19:44:09.252: INFO: Controller slow-terminating-unready-pod: Failed to GET from replica 1 [slow-terminating-unready-pod-s47hs]: the server is currently unable to handle the request (get pods slow-terminating-unready-pod-s47hs)
pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63767245255, loc:(*time.Location)(0x9de2b80)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63767245255, loc:(*time.Location)(0x9de2b80)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [slow-terminating-unready-pod]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63767245255, loc:(*time.Location)(0x9de2b80)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [slow-terminating-unready-pod]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63767245255, loc:(*time.Location)(0x9de2b80)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.20.48.93", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(0xc002522d98), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"slow-terminating-unready-pod", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc003116720), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/agnhost:2.32", ImageID:"", 
ContainerID:"", Started:(*bool)(0xc002408f6c)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
Sep 14 19:44:41.253: INFO: Controller slow-terminating-unready-pod: Failed to GET from replica 1 [slow-terminating-unready-pod-s47hs]: the server is currently unable to handle the request (get pods slow-terminating-unready-pod-s47hs)
pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63767245255, loc:(*time.Location)(0x9de2b80)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63767245255, loc:(*time.Location)(0x9de2b80)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [slow-terminating-unready-pod]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63767245255, loc:(*time.Location)(0x9de2b80)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [slow-terminating-unready-pod]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63767245255, loc:(*time.Location)(0x9de2b80)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.20.48.93", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(0xc002522d98), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"slow-terminating-unready-pod", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc003116720), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/agnhost:2.32", ImageID:"", 
ContainerID:"", Started:(*bool)(0xc002408f6c)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
Sep 14 19:45:13.252: INFO: Controller slow-terminating-unready-pod: Failed to GET from replica 1 [slow-terminating-unready-pod-s47hs]: the server is currently unable to handle the request (get pods slow-terminating-unready-pod-s47hs)
pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63767245255, loc:(*time.Location)(0x9de2b80)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63767245255, loc:(*time.Location)(0x9de2b80)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [slow-terminating-unready-pod]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63767245255, loc:(*time.Location)(0x9de2b80)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [slow-terminating-unready-pod]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63767245255, loc:(*time.Location)(0x9de2b80)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.20.48.93", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(0xc002522d98), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"slow-terminating-unready-pod", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc003116720), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/agnhost:2.32", ImageID:"", 
ContainerID:"", Started:(*bool)(0xc002408f6c)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
Sep 14 19:45:45.255: INFO: Controller slow-terminating-unready-pod: Failed to GET from replica 1 [slow-terminating-unready-pod-s47hs]: the server is currently unable to handle the request (get pods slow-terminating-unready-pod-s47hs)
pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63767245255, loc:(*time.Location)(0x9de2b80)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63767245255, loc:(*time.Location)(0x9de2b80)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [slow-terminating-unready-pod]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63767245255, loc:(*time.Location)(0x9de2b80)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [slow-terminating-unready-pod]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63767245255, loc:(*time.Location)(0x9de2b80)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.20.48.93", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(0xc002522d98), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"slow-terminating-unready-pod", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc003116720), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/agnhost:2.32", ImageID:"", 
ContainerID:"", Started:(*bool)(0xc002408f6c)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
Sep 14 19:46:17.252: INFO: Controller slow-terminating-unready-pod: Failed to GET from replica 1 [slow-terminating-unready-pod-s47hs]: the server is currently unable to handle the request (get pods slow-terminating-unready-pod-s47hs)
pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63767245255, loc:(*time.Location)(0x9de2b80)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63767245255, loc:(*time.Location)(0x9de2b80)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [slow-terminating-unready-pod]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63767245255, loc:(*time.Location)(0x9de2b80)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [slow-terminating-unready-pod]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63767245255, loc:(*time.Location)(0x9de2b80)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.20.48.93", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(0xc002522d98), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"slow-terminating-unready-pod", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc003116720), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/agnhost:2.32", ImageID:"", 
ContainerID:"", Started:(*bool)(0xc002408f6c)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
Sep 14 19:46:49.252: INFO: Controller slow-terminating-unready-pod: Failed to GET from replica 1 [slow-terminating-unready-pod-s47hs]: the server is currently unable to handle the request (get pods slow-terminating-unready-pod-s47hs)
pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63767245255, loc:(*time.Location)(0x9de2b80)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63767245255, loc:(*time.Location)(0x9de2b80)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [slow-terminating-unready-pod]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63767245255, loc:(*time.Location)(0x9de2b80)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [slow-terminating-unready-pod]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63767245255, loc:(*time.Location)(0x9de2b80)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.20.48.93", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(0xc002522d98), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"slow-terminating-unready-pod", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc003116720), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/agnhost:2.32", ImageID:"", 
ContainerID:"", Started:(*bool)(0xc002408f6c)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
Sep 14 19:47:21.258: INFO: Controller slow-terminating-unready-pod: Failed to GET from replica 1 [slow-terminating-unready-pod-s47hs]: the server is currently unable to handle the request (get pods slow-terminating-unready-pod-s47hs)
pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63767245255, loc:(*time.Location)(0x9de2b80)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63767245255, loc:(*time.Location)(0x9de2b80)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [slow-terminating-unready-pod]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63767245255, loc:(*time.Location)(0x9de2b80)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [slow-terminating-unready-pod]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63767245255, loc:(*time.Location)(0x9de2b80)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.20.48.93", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(0xc002522d98), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"slow-terminating-unready-pod", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc003116720), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/agnhost:2.32", ImageID:"", 
ContainerID:"", Started:(*bool)(0xc002408f6c)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
Sep 14 19:47:53.253: INFO: Controller slow-terminating-unready-pod: Failed to GET from replica 1 [slow-terminating-unready-pod-s47hs]: the server is currently unable to handle the request (get pods slow-terminating-unready-pod-s47hs)
pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63767245255, loc:(*time.Location)(0x9de2b80)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63767245255, loc:(*time.Location)(0x9de2b80)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [slow-terminating-unready-pod]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63767245255, loc:(*time.Location)(0x9de2b80)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [slow-terminating-unready-pod]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63767245255, loc:(*time.Location)(0x9de2b80)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.20.48.93", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(0xc002522d98), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"slow-terminating-unready-pod", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc003116720), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/agnhost:2.32", ImageID:"", 
ContainerID:"", Started:(*bool)(0xc002408f6c)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
Sep 14 19:48:25.253: INFO: Controller slow-terminating-unready-pod: Failed to GET from replica 1 [slow-terminating-unready-pod-s47hs]: the server is currently unable to handle the request (get pods slow-terminating-unready-pod-s47hs)
Sep 14 19:48:57.252: INFO: Controller slow-terminating-unready-pod: Failed to GET from replica 1 [slow-terminating-unready-pod-s47hs]: the server is currently unable to handle the request (get pods slow-terminating-unready-pod-s47hs)
Sep 14 19:49:29.252: INFO: Controller slow-terminating-unready-pod: Failed to GET from replica 1 [slow-terminating-unready-pod-s47hs]: the server is currently unable to handle the request (get pods slow-terminating-unready-pod-s47hs)
Sep 14 19:50:01.254: INFO: Controller slow-terminating-unready-pod: Failed to GET from replica 1 [slow-terminating-unready-pod-s47hs]: the server is currently unable to handle the request (get pods slow-terminating-unready-pod-s47hs)
Sep 14 19:50:33.252: INFO: Controller slow-terminating-unready-pod: Failed to GET from replica 1 [slow-terminating-unready-pod-s47hs]: the server is currently unable to handle the request (get pods slow-terminating-unready-pod-s47hs)
Sep 14 19:51:05.253: INFO: Controller slow-terminating-unready-pod: Failed to GET from replica 1 [slow-terminating-unready-pod-s47hs]: the server is currently unable to handle the request (get pods slow-terminating-unready-pod-s47hs)
Sep 14 19:51:37.255: INFO: Controller slow-terminating-unready-pod: Failed to GET from replica 1 [slow-terminating-unready-pod-s47hs]: the server is currently unable to handle the request (get pods slow-terminating-unready-pod-s47hs)