Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2021-09-28 19:15
Elapsed: 57m7s
Revision: master

No Test Failures!


Error lines from build-log.txt

... skipping 135 lines ...
I0928 19:16:22.341389    4728 up.go:43] Cleaning up any leaked resources from previous cluster
I0928 19:16:22.341420    4728 dumplogs.go:40] /logs/artifacts/5fa0a8b6-2090-11ec-b06e-0a54576c5767/kops toolbox dump --name e2e-b08e534318-62691.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /etc/aws-ssh/aws-ssh-private --ssh-user core
I0928 19:16:22.357731    4749 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I0928 19:16:22.357840    4749 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true

Cluster.kops.k8s.io "e2e-b08e534318-62691.test-cncf-aws.k8s.io" not found
W0928 19:16:22.917431    4728 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I0928 19:16:22.917472    4728 down.go:48] /logs/artifacts/5fa0a8b6-2090-11ec-b06e-0a54576c5767/kops delete cluster --name e2e-b08e534318-62691.test-cncf-aws.k8s.io --yes
I0928 19:16:22.932314    4759 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I0928 19:16:22.932395    4759 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true

error reading cluster configuration: Cluster.kops.k8s.io "e2e-b08e534318-62691.test-cncf-aws.k8s.io" not found
I0928 19:16:23.446711    4728 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2021/09/28 19:16:23 failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
I0928 19:16:23.453187    4728 http.go:37] curl https://ip.jsb.workers.dev
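The two curl lines above show the harness's external-IP discovery: it first asks the GCE metadata server (which returns 404 because this runner is not on GCE) and then falls back to a public IP echo service. A minimal sketch of that first-success fallback pattern, with injectable fetchers so it runs without network access (the function and fetcher names here are illustrative, not the tool's actual API):

```python
# Sketch of a first-success fallback chain for external-IP discovery.
# The real tooling curls the GCE metadata endpoint first and, on failure,
# falls back to an external echo service; the fetchers are injectable
# here so the pattern can be demonstrated offline.
from typing import Callable, List, Optional


def first_external_ip(fetchers: List[Callable[[], str]]) -> Optional[str]:
    """Try each fetcher in order; return the first non-empty IP fetched."""
    for fetch in fetchers:
        try:
            ip = fetch().strip()
            if ip:
                return ip
        except Exception:
            continue  # e.g. 404 from the metadata server outside GCE
    return None


def metadata_server() -> str:
    # Stand-in for: curl http://metadata.google.internal/.../external-ip
    raise RuntimeError("404: not running on GCE")


def echo_service() -> str:
    # Stand-in for: curl https://ip.jsb.workers.dev
    return "35.225.158.70\n"


if __name__ == "__main__":
    print(first_external_ip([metadata_server, echo_service]))
```

The discovered address is what ends up in the `--admin-access 35.225.158.70/32` flag of the `kops create cluster` command below.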
I0928 19:16:23.540375    4728 up.go:144] /logs/artifacts/5fa0a8b6-2090-11ec-b06e-0a54576c5767/kops create cluster --name e2e-b08e534318-62691.test-cncf-aws.k8s.io --cloud aws --kubernetes-version https://storage.googleapis.com/kubernetes-release/release/v1.21.5 --ssh-public-key /etc/aws-ssh/aws-ssh-public --override cluster.spec.nodePortAccess=0.0.0.0/0 --yes --image=075585003325/Flatcar-stable-2905.2.4-hvm --channel=alpha --networking=kopeio --container-runtime=containerd --admin-access 35.225.158.70/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones us-east-1a --master-size c5.large
I0928 19:16:23.555909    4767 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I0928 19:16:23.556024    4767 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true
I0928 19:16:23.598790    4767 create_cluster.go:728] Using SSH public key: /etc/aws-ssh/aws-ssh-public
I0928 19:16:24.210259    4767 new_cluster.go:1011]  Cloud Provider ID = aws
... skipping 42 lines ...

I0928 19:16:51.215890    4728 up.go:181] /logs/artifacts/5fa0a8b6-2090-11ec-b06e-0a54576c5767/kops validate cluster --name e2e-b08e534318-62691.test-cncf-aws.k8s.io --count 10 --wait 20m0s
I0928 19:16:51.230915    4787 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I0928 19:16:51.230998    4787 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true
Validating cluster e2e-b08e534318-62691.test-cncf-aws.k8s.io

W0928 19:16:52.365451    4787 validate_cluster.go:173] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-b08e534318-62691.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
W0928 19:17:02.400418    4787 validate_cluster.go:173] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-b08e534318-62691.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-east-1a	Master	c5.large	1	1	us-east-1a
nodes-us-east-1a	Node	t3.medium	4	4	us-east-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0928 19:17:12.435568    4787 validate_cluster.go:221] (will retry): cluster not yet healthy
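`kops validate cluster --count 10 --wait 20m0s` re-runs validation (roughly every 10 seconds, as the warning timestamps show) until the cluster passes 10 consecutive checks or the 20-minute budget runs out. A simplified, hedged sketch of that poll loop (the real logic lives in kops's validate_cluster.go; this is only the shape of it):

```python
# Simplified poll loop in the spirit of
# `kops validate cluster --count N --wait D`: re-run a health check every
# `interval` seconds until it passes `count` consecutive times, or give up
# when the overall `wait` budget is exhausted.
import time
from typing import Callable


def wait_until_healthy(check: Callable[[], bool], count: int,
                       wait: float, interval: float) -> bool:
    deadline = time.monotonic() + wait
    consecutive = 0
    while time.monotonic() < deadline:
        if check():
            consecutive += 1
            if consecutive >= count:
                return True   # e.g. 10 consecutive successes
        else:
            consecutive = 0   # any failure resets the streak
        time.sleep(interval)
    return False              # budget exhausted: "cluster not yet healthy"
```

In this run the check kept failing first on the DNS placeholder (until about 19:20), then on machines joining, then on system pods becoming ready, which is the usual bring-up order for a kops cluster.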
... skipping 287 lines (the same dns/apiserver validation failure repeated, retried every ~10s) ...
W0928 19:20:13.140206    4787 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-east-1a	Master	c5.large	1	1	us-east-1a
nodes-us-east-1a	Node	t3.medium	4	4	us-east-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0928 19:20:23.168197    4787 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-east-1a	Master	c5.large	1	1	us-east-1a
nodes-us-east-1a	Node	t3.medium	4	4	us-east-1a

... skipping 9 lines ...
Machine	i-0d18796061afbe613				machine "i-0d18796061afbe613" has not yet joined cluster
Node	ip-172-20-36-158.ec2.internal			node "ip-172-20-36-158.ec2.internal" of role "node" is not ready
Node	ip-172-20-62-211.ec2.internal			node "ip-172-20-62-211.ec2.internal" of role "node" is not ready
Pod	kube-system/coredns-5dc785954d-ts5s8		system-cluster-critical pod "coredns-5dc785954d-ts5s8" is pending
Pod	kube-system/coredns-autoscaler-84d4cfd89c-7ghkz	system-cluster-critical pod "coredns-autoscaler-84d4cfd89c-7ghkz" is pending

Validation Failed
W0928 19:20:34.763998    4787 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-east-1a	Master	c5.large	1	1	us-east-1a
nodes-us-east-1a	Node	t3.medium	4	4	us-east-1a

... skipping 5 lines ...

VALIDATION ERRORS
KIND	NAME			MESSAGE
Machine	i-0a334a9c3d3b045be	machine "i-0a334a9c3d3b045be" has not yet joined cluster
Machine	i-0d18796061afbe613	machine "i-0d18796061afbe613" has not yet joined cluster

Validation Failed
W0928 19:20:45.890404    4787 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-east-1a	Master	c5.large	1	1	us-east-1a
nodes-us-east-1a	Node	t3.medium	4	4	us-east-1a

... skipping 6 lines ...
ip-172-20-62-211.ec2.internal	node	True

VALIDATION ERRORS
KIND	NAME						MESSAGE
Pod	kube-system/kopeio-networking-agent-jfqrf	system-node-critical pod "kopeio-networking-agent-jfqrf" is pending

Validation Failed
W0928 19:20:56.984878    4787 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-east-1a	Master	c5.large	1	1	us-east-1a
nodes-us-east-1a	Node	t3.medium	4	4	us-east-1a

... skipping 36 lines ...
ip-172-20-62-211.ec2.internal	node	True

VALIDATION ERRORS
KIND	NAME							MESSAGE
Pod	kube-system/kube-proxy-ip-172-20-61-119.ec2.internal	system-node-critical pod "kube-proxy-ip-172-20-61-119.ec2.internal" is pending

Validation Failed
W0928 19:21:30.730247    4787 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-east-1a	Master	c5.large	1	1	us-east-1a
nodes-us-east-1a	Node	t3.medium	4	4	us-east-1a

... skipping 6 lines ...
ip-172-20-62-211.ec2.internal	node	True

VALIDATION ERRORS
KIND	NAME							MESSAGE
Pod	kube-system/kube-proxy-ip-172-20-50-189.ec2.internal	system-node-critical pod "kube-proxy-ip-172-20-50-189.ec2.internal" is pending

Validation Failed
W0928 19:21:41.834149    4787 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-east-1a	Master	c5.large	1	1	us-east-1a
nodes-us-east-1a	Node	t3.medium	4	4	us-east-1a

... skipping 996 lines ...
[AfterEach] [sig-api-machinery] client-go should negotiate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 28 19:24:01.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

•
------------------------------
{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/json\"","total":-1,"completed":1,"skipped":3,"failed":0}
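Each `{"msg":"PASSED ...` line is a one-line JSON progress record the e2e runner emits per spec, carrying running `completed`/`skipped`/`failed` totals. They can be pulled out of a build log with plain `json` parsing; a small sketch (the record shape is simply what appears in this log, not a documented stable format):

```python
# Extract one-line JSON test-progress records from build-log text and
# report each spec's status plus the runner's running totals.
import json


def parse_progress(lines):
    """Return the parsed JSON records that look like test-progress lines."""
    records = []
    for line in lines:
        line = line.strip()
        if line.startswith('{') and line.endswith('}'):
            try:
                rec = json.loads(line)
            except json.JSONDecodeError:
                continue
            if "msg" in rec and "completed" in rec:
                records.append(rec)
    return records


# Sample taken verbatim from the log above (surrounding separator lines
# included to show they are filtered out).
log = '''
------------------------------
{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \\"application/json\\"","total":-1,"completed":1,"skipped":3,"failed":0}
SSS
'''.splitlines()

for rec in parse_progress(log):
    status = rec["msg"].split()[0]   # "PASSED" or "FAILED"
    print(status, "completed:", rec["completed"], "failed:", rec["failed"])
```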

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:24:01.381: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 114 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 28 19:24:01.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-3004" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":-1,"completed":1,"skipped":3,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:24:01.869: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 80 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 17 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 28 19:24:02.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-6538" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":1,"skipped":7,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:24:03.097: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 69 lines ...
Sep 28 19:24:03.594: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-map-5ac6f7ac-9ff0-4201-bab4-fb9ff9767adf
STEP: Creating a pod to test consume configMaps
Sep 28 19:24:03.737: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8351d28b-d2b2-4eaf-8c4a-d3889ceb7bd2" in namespace "projected-6796" to be "Succeeded or Failed"
Sep 28 19:24:03.772: INFO: Pod "pod-projected-configmaps-8351d28b-d2b2-4eaf-8c4a-d3889ceb7bd2": Phase="Pending", Reason="", readiness=false. Elapsed: 35.251499ms
Sep 28 19:24:05.808: INFO: Pod "pod-projected-configmaps-8351d28b-d2b2-4eaf-8c4a-d3889ceb7bd2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071037137s
Sep 28 19:24:07.844: INFO: Pod "pod-projected-configmaps-8351d28b-d2b2-4eaf-8c4a-d3889ceb7bd2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.10767361s
STEP: Saw pod success
Sep 28 19:24:07.844: INFO: Pod "pod-projected-configmaps-8351d28b-d2b2-4eaf-8c4a-d3889ceb7bd2" satisfied condition "Succeeded or Failed"
Sep 28 19:24:07.880: INFO: Trying to get logs from node ip-172-20-36-158.ec2.internal pod pod-projected-configmaps-8351d28b-d2b2-4eaf-8c4a-d3889ceb7bd2 container agnhost-container: <nil>
STEP: delete the pod
Sep 28 19:24:07.956: INFO: Waiting for pod pod-projected-configmaps-8351d28b-d2b2-4eaf-8c4a-d3889ceb7bd2 to disappear
Sep 28 19:24:07.991: INFO: Pod pod-projected-configmaps-8351d28b-d2b2-4eaf-8c4a-d3889ceb7bd2 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.646 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":13,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:24:08.083: INFO: Only supported for providers [azure] (not aws)
... skipping 161 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:106
STEP: Creating a pod to test downward API volume plugin
Sep 28 19:24:02.107: INFO: Waiting up to 5m0s for pod "metadata-volume-5b70bed5-4a7c-4346-96ea-ee4a5e505f89" in namespace "projected-6421" to be "Succeeded or Failed"
Sep 28 19:24:02.144: INFO: Pod "metadata-volume-5b70bed5-4a7c-4346-96ea-ee4a5e505f89": Phase="Pending", Reason="", readiness=false. Elapsed: 36.389994ms
Sep 28 19:24:04.181: INFO: Pod "metadata-volume-5b70bed5-4a7c-4346-96ea-ee4a5e505f89": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074008332s
Sep 28 19:24:06.218: INFO: Pod "metadata-volume-5b70bed5-4a7c-4346-96ea-ee4a5e505f89": Phase="Pending", Reason="", readiness=false. Elapsed: 4.110934974s
Sep 28 19:24:08.255: INFO: Pod "metadata-volume-5b70bed5-4a7c-4346-96ea-ee4a5e505f89": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.14766541s
STEP: Saw pod success
Sep 28 19:24:08.255: INFO: Pod "metadata-volume-5b70bed5-4a7c-4346-96ea-ee4a5e505f89" satisfied condition "Succeeded or Failed"
Sep 28 19:24:08.291: INFO: Trying to get logs from node ip-172-20-61-119.ec2.internal pod metadata-volume-5b70bed5-4a7c-4346-96ea-ee4a5e505f89 container client-container: <nil>
STEP: delete the pod
Sep 28 19:24:08.665: INFO: Waiting for pod metadata-volume-5b70bed5-4a7c-4346-96ea-ee4a5e505f89 to disappear
Sep 28 19:24:08.702: INFO: Pod metadata-volume-5b70bed5-4a7c-4346-96ea-ee4a5e505f89 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:7.582 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:106
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":1,"skipped":5,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 28 19:24:01.565: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-map-da977374-c2a2-4e6e-a0e4-22980cf420cb
STEP: Creating a pod to test consume configMaps
Sep 28 19:24:04.254: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5926ee0a-61e2-4515-8dcc-cf53d44dd9c8" in namespace "projected-4317" to be "Succeeded or Failed"
Sep 28 19:24:04.290: INFO: Pod "pod-projected-configmaps-5926ee0a-61e2-4515-8dcc-cf53d44dd9c8": Phase="Pending", Reason="", readiness=false. Elapsed: 35.80468ms
Sep 28 19:24:06.326: INFO: Pod "pod-projected-configmaps-5926ee0a-61e2-4515-8dcc-cf53d44dd9c8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072505339s
Sep 28 19:24:08.363: INFO: Pod "pod-projected-configmaps-5926ee0a-61e2-4515-8dcc-cf53d44dd9c8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.1095797s
STEP: Saw pod success
Sep 28 19:24:08.363: INFO: Pod "pod-projected-configmaps-5926ee0a-61e2-4515-8dcc-cf53d44dd9c8" satisfied condition "Succeeded or Failed"
Sep 28 19:24:08.399: INFO: Trying to get logs from node ip-172-20-50-189.ec2.internal pod pod-projected-configmaps-5926ee0a-61e2-4515-8dcc-cf53d44dd9c8 container agnhost-container: <nil>
STEP: delete the pod
Sep 28 19:24:08.742: INFO: Waiting for pod pod-projected-configmaps-5926ee0a-61e2-4515-8dcc-cf53d44dd9c8 to disappear
Sep 28 19:24:08.778: INFO: Pod pod-projected-configmaps-5926ee0a-61e2-4515-8dcc-cf53d44dd9c8 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 16 lines ...
Sep 28 19:24:01.265: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-93f0b272-6564-4d23-a0d9-135a958942c6
STEP: Creating a pod to test consume secrets
Sep 28 19:24:01.415: INFO: Waiting up to 5m0s for pod "pod-secrets-0b90a5e1-837c-4cf5-8d47-cd704030e2fd" in namespace "secrets-6139" to be "Succeeded or Failed"
Sep 28 19:24:01.461: INFO: Pod "pod-secrets-0b90a5e1-837c-4cf5-8d47-cd704030e2fd": Phase="Pending", Reason="", readiness=false. Elapsed: 46.056059ms
Sep 28 19:24:03.497: INFO: Pod "pod-secrets-0b90a5e1-837c-4cf5-8d47-cd704030e2fd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.082636248s
Sep 28 19:24:05.534: INFO: Pod "pod-secrets-0b90a5e1-837c-4cf5-8d47-cd704030e2fd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.119497004s
Sep 28 19:24:07.573: INFO: Pod "pod-secrets-0b90a5e1-837c-4cf5-8d47-cd704030e2fd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.158625619s
Sep 28 19:24:09.610: INFO: Pod "pod-secrets-0b90a5e1-837c-4cf5-8d47-cd704030e2fd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.195430776s
STEP: Saw pod success
Sep 28 19:24:09.610: INFO: Pod "pod-secrets-0b90a5e1-837c-4cf5-8d47-cd704030e2fd" satisfied condition "Succeeded or Failed"
Sep 28 19:24:09.648: INFO: Trying to get logs from node ip-172-20-61-119.ec2.internal pod pod-secrets-0b90a5e1-837c-4cf5-8d47-cd704030e2fd container secret-volume-test: <nil>
STEP: delete the pod
Sep 28 19:24:09.724: INFO: Waiting for pod pod-secrets-0b90a5e1-837c-4cf5-8d47-cd704030e2fd to disappear
Sep 28 19:24:09.759: INFO: Pod pod-secrets-0b90a5e1-837c-4cf5-8d47-cd704030e2fd no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:8.753 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":0,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50
[It] files with FSGroup ownership should support (root,0644,tmpfs)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:67
STEP: Creating a pod to test emptydir 0644 on tmpfs
Sep 28 19:24:01.481: INFO: Waiting up to 5m0s for pod "pod-07b73f4e-ba8b-4e26-86cc-310ad1d0763b" in namespace "emptydir-8756" to be "Succeeded or Failed"
Sep 28 19:24:01.520: INFO: Pod "pod-07b73f4e-ba8b-4e26-86cc-310ad1d0763b": Phase="Pending", Reason="", readiness=false. Elapsed: 38.99246ms
Sep 28 19:24:03.559: INFO: Pod "pod-07b73f4e-ba8b-4e26-86cc-310ad1d0763b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077944339s
Sep 28 19:24:05.599: INFO: Pod "pod-07b73f4e-ba8b-4e26-86cc-310ad1d0763b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.117455842s
Sep 28 19:24:07.639: INFO: Pod "pod-07b73f4e-ba8b-4e26-86cc-310ad1d0763b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.15758104s
Sep 28 19:24:09.678: INFO: Pod "pod-07b73f4e-ba8b-4e26-86cc-310ad1d0763b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.196818195s
STEP: Saw pod success
Sep 28 19:24:09.678: INFO: Pod "pod-07b73f4e-ba8b-4e26-86cc-310ad1d0763b" satisfied condition "Succeeded or Failed"
Sep 28 19:24:09.716: INFO: Trying to get logs from node ip-172-20-50-189.ec2.internal pod pod-07b73f4e-ba8b-4e26-86cc-310ad1d0763b container test-container: <nil>
STEP: delete the pod
Sep 28 19:24:09.798: INFO: Waiting for pod pod-07b73f4e-ba8b-4e26-86cc-310ad1d0763b to disappear
Sep 28 19:24:09.837: INFO: Pod pod-07b73f4e-ba8b-4e26-86cc-310ad1d0763b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 6 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:48
    files with FSGroup ownership should support (root,0644,tmpfs)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:67
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] files with FSGroup ownership should support (root,0644,tmpfs)","total":-1,"completed":1,"skipped":1,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:24:09.962: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 34 lines ...
      Only supported for providers [azure] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1566
------------------------------
SSSSS
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should update ConfigMap successfully","total":-1,"completed":1,"skipped":26,"failed":0}
[BeforeEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 28 19:24:03.812: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:37
[It] should support r/w [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:65
STEP: Creating a pod to test hostPath r/w
Sep 28 19:24:05.077: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-9258" to be "Succeeded or Failed"
Sep 28 19:24:05.118: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 40.467052ms
Sep 28 19:24:07.157: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079450408s
Sep 28 19:24:09.198: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.12021787s
Sep 28 19:24:11.238: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.160292876s
STEP: Saw pod success
Sep 28 19:24:11.238: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
Sep 28 19:24:11.276: INFO: Trying to get logs from node ip-172-20-50-189.ec2.internal pod pod-host-path-test container test-container-2: <nil>
STEP: delete the pod
Sep 28 19:24:11.359: INFO: Waiting for pod pod-host-path-test to disappear
Sep 28 19:24:11.396: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:7.663 seconds]
[sig-storage] HostPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support r/w [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:65
------------------------------
{"msg":"PASSED [sig-storage] HostPath should support r/w [NodeConformance]","total":-1,"completed":2,"skipped":26,"failed":0}

S
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 43 lines ...
STEP: Destroying namespace "services-4317" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

•
------------------------------
{"msg":"PASSED [sig-network] Services should complete a service status lifecycle [Conformance]","total":-1,"completed":3,"skipped":27,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:24:12.411: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 59 lines ...
• [SLOW TEST:13.797 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":4,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:24:15.069: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 106 lines ...
• [SLOW TEST:8.765 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":-1,"completed":2,"skipped":5,"failed":0}

S
------------------------------
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 24 lines ...
• [SLOW TEST:12.616 seconds]
[sig-apps] ReplicaSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":-1,"completed":3,"skipped":31,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 18 lines ...
• [SLOW TEST:11.584 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a persistent volume claim with a storage class
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/resource_quota.go:531
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim with a storage class","total":-1,"completed":2,"skipped":12,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:24:21.605: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 97 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":1,"skipped":10,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:24:22.891: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 66 lines ...
Sep 28 19:24:11.508: INFO: PersistentVolumeClaim pvc-tkfrw found but phase is Pending instead of Bound.
Sep 28 19:24:13.546: INFO: PersistentVolumeClaim pvc-tkfrw found and phase=Bound (2.076392726s)
Sep 28 19:24:13.546: INFO: Waiting up to 3m0s for PersistentVolume local-cmxhk to have phase Bound
Sep 28 19:24:13.585: INFO: PersistentVolume local-cmxhk found and phase=Bound (38.680712ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-gbhf
STEP: Creating a pod to test subpath
Sep 28 19:24:13.704: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-gbhf" in namespace "provisioning-8525" to be "Succeeded or Failed"
Sep 28 19:24:13.742: INFO: Pod "pod-subpath-test-preprovisionedpv-gbhf": Phase="Pending", Reason="", readiness=false. Elapsed: 38.801494ms
Sep 28 19:24:15.782: INFO: Pod "pod-subpath-test-preprovisionedpv-gbhf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078435714s
Sep 28 19:24:17.823: INFO: Pod "pod-subpath-test-preprovisionedpv-gbhf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.118872877s
Sep 28 19:24:19.863: INFO: Pod "pod-subpath-test-preprovisionedpv-gbhf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.158930926s
Sep 28 19:24:21.925: INFO: Pod "pod-subpath-test-preprovisionedpv-gbhf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.221373246s
STEP: Saw pod success
Sep 28 19:24:21.925: INFO: Pod "pod-subpath-test-preprovisionedpv-gbhf" satisfied condition "Succeeded or Failed"
Sep 28 19:24:21.964: INFO: Trying to get logs from node ip-172-20-62-211.ec2.internal pod pod-subpath-test-preprovisionedpv-gbhf container test-container-volume-preprovisionedpv-gbhf: <nil>
STEP: delete the pod
Sep 28 19:24:22.049: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-gbhf to disappear
Sep 28 19:24:22.090: INFO: Pod pod-subpath-test-preprovisionedpv-gbhf no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-gbhf
Sep 28 19:24:22.090: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-gbhf" in namespace "provisioning-8525"
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":1,"skipped":17,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
... skipping 23 lines ...
Sep 28 19:24:11.351: INFO: PersistentVolumeClaim pvc-g76bk found but phase is Pending instead of Bound.
Sep 28 19:24:13.391: INFO: PersistentVolumeClaim pvc-g76bk found and phase=Bound (4.116817462s)
Sep 28 19:24:13.391: INFO: Waiting up to 3m0s for PersistentVolume local-qwvh7 to have phase Bound
Sep 28 19:24:13.432: INFO: PersistentVolume local-qwvh7 found and phase=Bound (40.503397ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-24xg
STEP: Creating a pod to test exec-volume-test
Sep 28 19:24:13.549: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-24xg" in namespace "volume-3598" to be "Succeeded or Failed"
Sep 28 19:24:13.587: INFO: Pod "exec-volume-test-preprovisionedpv-24xg": Phase="Pending", Reason="", readiness=false. Elapsed: 38.447535ms
Sep 28 19:24:15.627: INFO: Pod "exec-volume-test-preprovisionedpv-24xg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077701299s
Sep 28 19:24:17.666: INFO: Pod "exec-volume-test-preprovisionedpv-24xg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.117468985s
Sep 28 19:24:19.707: INFO: Pod "exec-volume-test-preprovisionedpv-24xg": Phase="Pending", Reason="", readiness=false. Elapsed: 6.158088757s
Sep 28 19:24:21.746: INFO: Pod "exec-volume-test-preprovisionedpv-24xg": Phase="Pending", Reason="", readiness=false. Elapsed: 8.197495977s
Sep 28 19:24:23.788: INFO: Pod "exec-volume-test-preprovisionedpv-24xg": Phase="Pending", Reason="", readiness=false. Elapsed: 10.238825131s
Sep 28 19:24:25.829: INFO: Pod "exec-volume-test-preprovisionedpv-24xg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.279544611s
STEP: Saw pod success
Sep 28 19:24:25.829: INFO: Pod "exec-volume-test-preprovisionedpv-24xg" satisfied condition "Succeeded or Failed"
Sep 28 19:24:25.867: INFO: Trying to get logs from node ip-172-20-61-119.ec2.internal pod exec-volume-test-preprovisionedpv-24xg container exec-container-preprovisionedpv-24xg: <nil>
STEP: delete the pod
Sep 28 19:24:26.046: INFO: Waiting for pod exec-volume-test-preprovisionedpv-24xg to disappear
Sep 28 19:24:26.093: INFO: Pod exec-volume-test-preprovisionedpv-24xg no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-24xg
Sep 28 19:24:26.093: INFO: Deleting pod "exec-volume-test-preprovisionedpv-24xg" in namespace "volume-3598"
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":1,"skipped":0,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:24:27.628: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 129 lines ...
• [SLOW TEST:19.441 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":-1,"completed":2,"skipped":8,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:24:28.317: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 24 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating secret secrets-2625/secret-test-b3a8080d-1e55-4131-9b42-ed6492b2bec2
STEP: Creating a pod to test consume secrets
Sep 28 19:24:23.803: INFO: Waiting up to 5m0s for pod "pod-configmaps-c87669d9-7d80-456b-88f0-5fcaf313307c" in namespace "secrets-2625" to be "Succeeded or Failed"
Sep 28 19:24:23.843: INFO: Pod "pod-configmaps-c87669d9-7d80-456b-88f0-5fcaf313307c": Phase="Pending", Reason="", readiness=false. Elapsed: 39.668768ms
Sep 28 19:24:25.881: INFO: Pod "pod-configmaps-c87669d9-7d80-456b-88f0-5fcaf313307c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078057083s
Sep 28 19:24:27.921: INFO: Pod "pod-configmaps-c87669d9-7d80-456b-88f0-5fcaf313307c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.117632303s
Sep 28 19:24:29.959: INFO: Pod "pod-configmaps-c87669d9-7d80-456b-88f0-5fcaf313307c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.155858851s
Sep 28 19:24:31.997: INFO: Pod "pod-configmaps-c87669d9-7d80-456b-88f0-5fcaf313307c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.194088777s
Sep 28 19:24:34.047: INFO: Pod "pod-configmaps-c87669d9-7d80-456b-88f0-5fcaf313307c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.244509864s
STEP: Saw pod success
Sep 28 19:24:34.048: INFO: Pod "pod-configmaps-c87669d9-7d80-456b-88f0-5fcaf313307c" satisfied condition "Succeeded or Failed"
Sep 28 19:24:34.089: INFO: Trying to get logs from node ip-172-20-36-158.ec2.internal pod pod-configmaps-c87669d9-7d80-456b-88f0-5fcaf313307c container env-test: <nil>
STEP: delete the pod
Sep 28 19:24:34.172: INFO: Waiting for pod pod-configmaps-c87669d9-7d80-456b-88f0-5fcaf313307c to disappear
Sep 28 19:24:34.210: INFO: Pod pod-configmaps-c87669d9-7d80-456b-88f0-5fcaf313307c no longer exists
[AfterEach] [sig-node] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:10.772 seconds]
[sig-node] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":18,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:24:34.307: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 43 lines ...
Sep 28 19:24:10.838: INFO: PersistentVolumeClaim pvc-dql8f found but phase is Pending instead of Bound.
Sep 28 19:24:12.969: INFO: PersistentVolumeClaim pvc-dql8f found and phase=Bound (6.245518955s)
Sep 28 19:24:12.969: INFO: Waiting up to 3m0s for PersistentVolume local-rqhx7 to have phase Bound
Sep 28 19:24:13.006: INFO: PersistentVolume local-rqhx7 found and phase=Bound (37.768504ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-g6wg
STEP: Creating a pod to test subpath
Sep 28 19:24:13.123: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-g6wg" in namespace "provisioning-5593" to be "Succeeded or Failed"
Sep 28 19:24:13.161: INFO: Pod "pod-subpath-test-preprovisionedpv-g6wg": Phase="Pending", Reason="", readiness=false. Elapsed: 37.735864ms
Sep 28 19:24:15.201: INFO: Pod "pod-subpath-test-preprovisionedpv-g6wg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078050526s
Sep 28 19:24:17.239: INFO: Pod "pod-subpath-test-preprovisionedpv-g6wg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.116050645s
Sep 28 19:24:19.277: INFO: Pod "pod-subpath-test-preprovisionedpv-g6wg": Phase="Pending", Reason="", readiness=false. Elapsed: 6.15430875s
Sep 28 19:24:21.315: INFO: Pod "pod-subpath-test-preprovisionedpv-g6wg": Phase="Pending", Reason="", readiness=false. Elapsed: 8.192354777s
Sep 28 19:24:23.354: INFO: Pod "pod-subpath-test-preprovisionedpv-g6wg": Phase="Pending", Reason="", readiness=false. Elapsed: 10.230788533s
Sep 28 19:24:25.392: INFO: Pod "pod-subpath-test-preprovisionedpv-g6wg": Phase="Pending", Reason="", readiness=false. Elapsed: 12.269630275s
Sep 28 19:24:27.432: INFO: Pod "pod-subpath-test-preprovisionedpv-g6wg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.309212067s
STEP: Saw pod success
Sep 28 19:24:27.432: INFO: Pod "pod-subpath-test-preprovisionedpv-g6wg" satisfied condition "Succeeded or Failed"
Sep 28 19:24:27.470: INFO: Trying to get logs from node ip-172-20-36-158.ec2.internal pod pod-subpath-test-preprovisionedpv-g6wg container test-container-subpath-preprovisionedpv-g6wg: <nil>
STEP: delete the pod
Sep 28 19:24:27.551: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-g6wg to disappear
Sep 28 19:24:27.588: INFO: Pod pod-subpath-test-preprovisionedpv-g6wg no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-g6wg
Sep 28 19:24:27.588: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-g6wg" in namespace "provisioning-5593"
STEP: Creating pod pod-subpath-test-preprovisionedpv-g6wg
STEP: Creating a pod to test subpath
Sep 28 19:24:27.664: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-g6wg" in namespace "provisioning-5593" to be "Succeeded or Failed"
Sep 28 19:24:27.702: INFO: Pod "pod-subpath-test-preprovisionedpv-g6wg": Phase="Pending", Reason="", readiness=false. Elapsed: 37.224073ms
Sep 28 19:24:29.740: INFO: Pod "pod-subpath-test-preprovisionedpv-g6wg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075509625s
Sep 28 19:24:31.778: INFO: Pod "pod-subpath-test-preprovisionedpv-g6wg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.113418667s
Sep 28 19:24:33.816: INFO: Pod "pod-subpath-test-preprovisionedpv-g6wg": Phase="Pending", Reason="", readiness=false. Elapsed: 6.151415381s
Sep 28 19:24:35.853: INFO: Pod "pod-subpath-test-preprovisionedpv-g6wg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.189113822s
STEP: Saw pod success
Sep 28 19:24:35.854: INFO: Pod "pod-subpath-test-preprovisionedpv-g6wg" satisfied condition "Succeeded or Failed"
Sep 28 19:24:35.891: INFO: Trying to get logs from node ip-172-20-36-158.ec2.internal pod pod-subpath-test-preprovisionedpv-g6wg container test-container-subpath-preprovisionedpv-g6wg: <nil>
STEP: delete the pod
Sep 28 19:24:35.970: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-g6wg to disappear
Sep 28 19:24:36.008: INFO: Pod pod-subpath-test-preprovisionedpv-g6wg no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-g6wg
Sep 28 19:24:36.008: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-g6wg" in namespace "provisioning-5593"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:399
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":1,"skipped":5,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:24:36.909: INFO: Only supported for providers [vsphere] (not aws)
... skipping 44 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when scheduling a read only busybox container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:188
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":2,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:24:38.121: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 65 lines ...
Sep 28 19:24:25.746: INFO: PersistentVolumeClaim pvc-2sp7v found but phase is Pending instead of Bound.
Sep 28 19:24:27.782: INFO: PersistentVolumeClaim pvc-2sp7v found and phase=Bound (14.29162233s)
Sep 28 19:24:27.782: INFO: Waiting up to 3m0s for PersistentVolume local-79z9v to have phase Bound
Sep 28 19:24:27.818: INFO: PersistentVolume local-79z9v found and phase=Bound (35.548414ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-fqgz
STEP: Creating a pod to test exec-volume-test
Sep 28 19:24:27.935: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-fqgz" in namespace "volume-4929" to be "Succeeded or Failed"
Sep 28 19:24:27.970: INFO: Pod "exec-volume-test-preprovisionedpv-fqgz": Phase="Pending", Reason="", readiness=false. Elapsed: 35.776807ms
Sep 28 19:24:30.006: INFO: Pod "exec-volume-test-preprovisionedpv-fqgz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071456952s
Sep 28 19:24:32.043: INFO: Pod "exec-volume-test-preprovisionedpv-fqgz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.108716816s
Sep 28 19:24:34.080: INFO: Pod "exec-volume-test-preprovisionedpv-fqgz": Phase="Pending", Reason="", readiness=false. Elapsed: 6.145470796s
Sep 28 19:24:36.116: INFO: Pod "exec-volume-test-preprovisionedpv-fqgz": Phase="Pending", Reason="", readiness=false. Elapsed: 8.181232121s
Sep 28 19:24:38.153: INFO: Pod "exec-volume-test-preprovisionedpv-fqgz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.21794871s
STEP: Saw pod success
Sep 28 19:24:38.153: INFO: Pod "exec-volume-test-preprovisionedpv-fqgz" satisfied condition "Succeeded or Failed"
Sep 28 19:24:38.190: INFO: Trying to get logs from node ip-172-20-62-211.ec2.internal pod exec-volume-test-preprovisionedpv-fqgz container exec-container-preprovisionedpv-fqgz: <nil>
STEP: delete the pod
Sep 28 19:24:38.276: INFO: Waiting for pod exec-volume-test-preprovisionedpv-fqgz to disappear
Sep 28 19:24:38.312: INFO: Pod exec-volume-test-preprovisionedpv-fqgz no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-fqgz
Sep 28 19:24:38.312: INFO: Deleting pod "exec-volume-test-preprovisionedpv-fqgz" in namespace "volume-4929"
... skipping 20 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":1,"skipped":11,"failed":0}
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 28 19:24:39.144: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail when exceeds active deadline
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:253
STEP: Creating a job
STEP: Ensuring job past active deadline
[AfterEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 28 19:24:41.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-8627" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] Job should fail when exceeds active deadline","total":-1,"completed":2,"skipped":11,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:24:41.477: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 72 lines ...
Sep 28 19:24:26.719: INFO: PersistentVolumeClaim pvc-jflbv found but phase is Pending instead of Bound.
Sep 28 19:24:28.758: INFO: PersistentVolumeClaim pvc-jflbv found and phase=Bound (16.359094054s)
Sep 28 19:24:28.758: INFO: Waiting up to 3m0s for PersistentVolume local-btwbr to have phase Bound
Sep 28 19:24:28.796: INFO: PersistentVolume local-btwbr found and phase=Bound (38.178644ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-9bzf
STEP: Creating a pod to test subpath
Sep 28 19:24:28.912: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-9bzf" in namespace "provisioning-982" to be "Succeeded or Failed"
Sep 28 19:24:28.951: INFO: Pod "pod-subpath-test-preprovisionedpv-9bzf": Phase="Pending", Reason="", readiness=false. Elapsed: 38.563456ms
Sep 28 19:24:30.990: INFO: Pod "pod-subpath-test-preprovisionedpv-9bzf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077774519s
Sep 28 19:24:33.029: INFO: Pod "pod-subpath-test-preprovisionedpv-9bzf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.116737705s
Sep 28 19:24:35.071: INFO: Pod "pod-subpath-test-preprovisionedpv-9bzf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.158945265s
Sep 28 19:24:37.110: INFO: Pod "pod-subpath-test-preprovisionedpv-9bzf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.198375997s
Sep 28 19:24:39.151: INFO: Pod "pod-subpath-test-preprovisionedpv-9bzf": Phase="Pending", Reason="", readiness=false. Elapsed: 10.238549575s
Sep 28 19:24:41.190: INFO: Pod "pod-subpath-test-preprovisionedpv-9bzf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.277588893s
STEP: Saw pod success
Sep 28 19:24:41.190: INFO: Pod "pod-subpath-test-preprovisionedpv-9bzf" satisfied condition "Succeeded or Failed"
Sep 28 19:24:41.230: INFO: Trying to get logs from node ip-172-20-62-211.ec2.internal pod pod-subpath-test-preprovisionedpv-9bzf container test-container-volume-preprovisionedpv-9bzf: <nil>
STEP: delete the pod
Sep 28 19:24:41.318: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-9bzf to disappear
Sep 28 19:24:41.356: INFO: Pod pod-subpath-test-preprovisionedpv-9bzf no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-9bzf
Sep 28 19:24:41.356: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-9bzf" in namespace "provisioning-982"
... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":1,"skipped":8,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:24:42.934: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 134 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 28 19:24:41.738: INFO: Waiting up to 5m0s for pod "downwardapi-volume-754edc6c-9df7-4dde-ae2b-dd60b6daf8d5" in namespace "projected-8109" to be "Succeeded or Failed"
Sep 28 19:24:41.774: INFO: Pod "downwardapi-volume-754edc6c-9df7-4dde-ae2b-dd60b6daf8d5": Phase="Pending", Reason="", readiness=false. Elapsed: 35.902597ms
Sep 28 19:24:43.811: INFO: Pod "downwardapi-volume-754edc6c-9df7-4dde-ae2b-dd60b6daf8d5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.072652686s
STEP: Saw pod success
Sep 28 19:24:43.811: INFO: Pod "downwardapi-volume-754edc6c-9df7-4dde-ae2b-dd60b6daf8d5" satisfied condition "Succeeded or Failed"
Sep 28 19:24:43.848: INFO: Trying to get logs from node ip-172-20-50-189.ec2.internal pod downwardapi-volume-754edc6c-9df7-4dde-ae2b-dd60b6daf8d5 container client-container: <nil>
STEP: delete the pod
Sep 28 19:24:43.936: INFO: Waiting for pod downwardapi-volume-754edc6c-9df7-4dde-ae2b-dd60b6daf8d5 to disappear
Sep 28 19:24:43.976: INFO: Pod downwardapi-volume-754edc6c-9df7-4dde-ae2b-dd60b6daf8d5 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 28 19:24:43.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8109" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":19,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:24:44.059: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 71 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: hostPathSymlink]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver hostPathSymlink doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 142 lines ...
[AfterEach] [sig-api-machinery] client-go should negotiate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 28 19:24:46.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

•
------------------------------
{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/vnd.kubernetes.protobuf\"","total":-1,"completed":2,"skipped":20,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 17 lines ...
Sep 28 19:24:10.818: INFO: PersistentVolumeClaim pvc-cstt2 found but phase is Pending instead of Bound.
Sep 28 19:24:12.955: INFO: PersistentVolumeClaim pvc-cstt2 found and phase=Bound (2.173193154s)
Sep 28 19:24:12.955: INFO: Waiting up to 3m0s for PersistentVolume local-mcn6w to have phase Bound
Sep 28 19:24:12.993: INFO: PersistentVolume local-mcn6w found and phase=Bound (37.858715ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-7jlx
STEP: Creating a pod to test atomic-volume-subpath
Sep 28 19:24:13.107: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-7jlx" in namespace "provisioning-9243" to be "Succeeded or Failed"
Sep 28 19:24:13.145: INFO: Pod "pod-subpath-test-preprovisionedpv-7jlx": Phase="Pending", Reason="", readiness=false. Elapsed: 37.683526ms
Sep 28 19:24:15.182: INFO: Pod "pod-subpath-test-preprovisionedpv-7jlx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075499176s
Sep 28 19:24:17.221: INFO: Pod "pod-subpath-test-preprovisionedpv-7jlx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.114285023s
Sep 28 19:24:19.259: INFO: Pod "pod-subpath-test-preprovisionedpv-7jlx": Phase="Pending", Reason="", readiness=false. Elapsed: 6.152431161s
Sep 28 19:24:21.297: INFO: Pod "pod-subpath-test-preprovisionedpv-7jlx": Phase="Pending", Reason="", readiness=false. Elapsed: 8.190113113s
Sep 28 19:24:23.335: INFO: Pod "pod-subpath-test-preprovisionedpv-7jlx": Phase="Pending", Reason="", readiness=false. Elapsed: 10.227902231s
... skipping 6 lines ...
Sep 28 19:24:37.600: INFO: Pod "pod-subpath-test-preprovisionedpv-7jlx": Phase="Running", Reason="", readiness=true. Elapsed: 24.492784754s
Sep 28 19:24:39.637: INFO: Pod "pod-subpath-test-preprovisionedpv-7jlx": Phase="Running", Reason="", readiness=true. Elapsed: 26.53028548s
Sep 28 19:24:41.675: INFO: Pod "pod-subpath-test-preprovisionedpv-7jlx": Phase="Running", Reason="", readiness=true. Elapsed: 28.567963047s
Sep 28 19:24:43.712: INFO: Pod "pod-subpath-test-preprovisionedpv-7jlx": Phase="Running", Reason="", readiness=true. Elapsed: 30.605215716s
Sep 28 19:24:45.750: INFO: Pod "pod-subpath-test-preprovisionedpv-7jlx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 32.642735766s
STEP: Saw pod success
Sep 28 19:24:45.750: INFO: Pod "pod-subpath-test-preprovisionedpv-7jlx" satisfied condition "Succeeded or Failed"
Sep 28 19:24:45.787: INFO: Trying to get logs from node ip-172-20-50-189.ec2.internal pod pod-subpath-test-preprovisionedpv-7jlx container test-container-subpath-preprovisionedpv-7jlx: <nil>
STEP: delete the pod
Sep 28 19:24:45.952: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-7jlx to disappear
Sep 28 19:24:45.989: INFO: Pod pod-subpath-test-preprovisionedpv-7jlx no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-7jlx
Sep 28 19:24:45.989: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-7jlx" in namespace "provisioning-9243"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":1,"skipped":7,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:24:46.601: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 176 lines ...
Sep 28 19:24:40.132: INFO: PersistentVolumeClaim pvc-xb6rf found but phase is Pending instead of Bound.
Sep 28 19:24:42.170: INFO: PersistentVolumeClaim pvc-xb6rf found and phase=Bound (4.112425988s)
Sep 28 19:24:42.171: INFO: Waiting up to 3m0s for PersistentVolume local-7vv9g to have phase Bound
Sep 28 19:24:42.207: INFO: PersistentVolume local-7vv9g found and phase=Bound (36.441152ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-69c4
STEP: Creating a pod to test subpath
Sep 28 19:24:42.317: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-69c4" in namespace "provisioning-392" to be "Succeeded or Failed"
Sep 28 19:24:42.354: INFO: Pod "pod-subpath-test-preprovisionedpv-69c4": Phase="Pending", Reason="", readiness=false. Elapsed: 37.063907ms
Sep 28 19:24:44.392: INFO: Pod "pod-subpath-test-preprovisionedpv-69c4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074956072s
Sep 28 19:24:46.429: INFO: Pod "pod-subpath-test-preprovisionedpv-69c4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.111783556s
STEP: Saw pod success
Sep 28 19:24:46.429: INFO: Pod "pod-subpath-test-preprovisionedpv-69c4" satisfied condition "Succeeded or Failed"
Sep 28 19:24:46.466: INFO: Trying to get logs from node ip-172-20-50-189.ec2.internal pod pod-subpath-test-preprovisionedpv-69c4 container test-container-volume-preprovisionedpv-69c4: <nil>
STEP: delete the pod
Sep 28 19:24:46.556: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-69c4 to disappear
Sep 28 19:24:46.597: INFO: Pod pod-subpath-test-preprovisionedpv-69c4 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-69c4
Sep 28 19:24:46.597: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-69c4" in namespace "provisioning-392"
... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":3,"skipped":13,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:24:48.413: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 50 lines ...
Sep 28 19:24:47.192: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Sep 28 19:24:47.192: INFO: Running '/tmp/kubectl2271960906/kubectl --server=https://api.e2e-b08e534318-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-8541 describe pod agnhost-primary-qtnch'
Sep 28 19:24:47.481: INFO: stderr: ""
Sep 28 19:24:47.481: INFO: stdout: "Name:         agnhost-primary-qtnch\nNamespace:    kubectl-8541\nPriority:     0\nNode:         ip-172-20-36-158.ec2.internal/172.20.36.158\nStart Time:   Tue, 28 Sep 2021 19:24:38 +0000\nLabels:       app=agnhost\n              role=primary\nAnnotations:  <none>\nStatus:       Running\nIP:           100.96.1.39\nIPs:\n  IP:           100.96.1.39\nControlled By:  ReplicationController/agnhost-primary\nContainers:\n  agnhost-primary:\n    Container ID:   containerd://62cd25768bf3b856e1ec96af4790586eed1f0bb7c90ddd4e598a9e9e318bc51b\n    Image:          k8s.gcr.io/e2e-test-images/agnhost:2.32\n    Image ID:       k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Tue, 28 Sep 2021 19:24:39 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2pvtm (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  kube-api-access-2pvtm:\n    Type:                    Projected (a volume that contains injected data from multiple sources)\n    TokenExpirationSeconds:  3607\n    ConfigMapName:           kube-root-ca.crt\n    ConfigMapOptional:       <nil>\n    DownwardAPI:             true\nQoS Class:                   BestEffort\nNode-Selectors:              <none>\nTolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n  Type    Reason     Age   From               Message\n  ----    ------     ----  ----               -------\n  Normal  Scheduled  9s    default-scheduler  Successfully assigned kubectl-8541/agnhost-primary-qtnch to 
ip-172-20-36-158.ec2.internal\n  Normal  Pulled     8s    kubelet            Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.32\" already present on machine\n  Normal  Created    8s    kubelet            Created container agnhost-primary\n  Normal  Started    8s    kubelet            Started container agnhost-primary\n"
Sep 28 19:24:47.481: INFO: Running '/tmp/kubectl2271960906/kubectl --server=https://api.e2e-b08e534318-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-8541 describe rc agnhost-primary'
Sep 28 19:24:47.807: INFO: stderr: ""
Sep 28 19:24:47.807: INFO: stdout: "Name:         agnhost-primary\nNamespace:    kubectl-8541\nSelector:     app=agnhost,role=primary\nLabels:       app=agnhost\n              role=primary\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=primary\n  Containers:\n   agnhost-primary:\n    Image:        k8s.gcr.io/e2e-test-images/agnhost:2.32\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  9s    replication-controller  Created pod: agnhost-primary-qtnch\n"
Sep 28 19:24:47.808: INFO: Running '/tmp/kubectl2271960906/kubectl --server=https://api.e2e-b08e534318-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-8541 describe service agnhost-primary'
Sep 28 19:24:48.124: INFO: stderr: ""
Sep 28 19:24:48.124: INFO: stdout: "Name:              agnhost-primary\nNamespace:         kubectl-8541\nLabels:            app=agnhost\n                   role=primary\nAnnotations:       <none>\nSelector:          app=agnhost,role=primary\nType:              ClusterIP\nIP Family Policy:  SingleStack\nIP Families:       IPv4\nIP:                100.71.102.102\nIPs:               100.71.102.102\nPort:              <unset>  6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         100.96.1.39:6379\nSession Affinity:  None\nEvents:            <none>\n"
Sep 28 19:24:48.169: INFO: Running '/tmp/kubectl2271960906/kubectl --server=https://api.e2e-b08e534318-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-8541 describe node ip-172-20-36-158.ec2.internal'
Sep 28 19:24:48.617: INFO: stderr: ""
Sep 28 19:24:48.617: INFO: stdout: "Name:               ip-172-20-36-158.ec2.internal\nRoles:              node\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/instance-type=t3.medium\n                    beta.kubernetes.io/os=linux\n                    failure-domain.beta.kubernetes.io/region=us-east-1\n                    failure-domain.beta.kubernetes.io/zone=us-east-1a\n                    kops.k8s.io/instancegroup=nodes-us-east-1a\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=ip-172-20-36-158.ec2.internal\n                    kubernetes.io/os=linux\n                    kubernetes.io/role=node\n                    node-role.kubernetes.io/node=\n                    node.kubernetes.io/instance-type=t3.medium\n                    topology.kubernetes.io/region=us-east-1\n                    topology.kubernetes.io/zone=us-east-1a\nAnnotations:        node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Tue, 28 Sep 2021 19:20:24 +0000\nTaints:             <none>\nUnschedulable:      false\nLease:\n  HolderIdentity:  ip-172-20-36-158.ec2.internal\n  AcquireTime:     <unset>\n  RenewTime:       Tue, 28 Sep 2021 19:24:39 +0000\nConditions:\n  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----             ------  -----------------                 ------------------                ------                       -------\n  MemoryPressure   False   Tue, 28 Sep 2021 19:24:25 +0000   Tue, 28 Sep 2021 19:20:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure     False   Tue, 28 Sep 2021 19:24:25 +0000   Tue, 28 Sep 2021 19:20:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure      False   Tue, 28 Sep 2021 19:24:25 +0000   Tue, 28 Sep 2021 19:20:24 
+0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready            True    Tue, 28 Sep 2021 19:24:25 +0000   Tue, 28 Sep 2021 19:20:34 +0000   KubeletReady                 kubelet is posting ready status\nAddresses:\n  InternalIP:   172.20.36.158\n  ExternalIP:   52.91.99.224\n  Hostname:     ip-172-20-36-158.ec2.internal\n  InternalDNS:  ip-172-20-36-158.ec2.internal\n  ExternalDNS:  ec2-52-91-99-224.compute-1.amazonaws.com\nCapacity:\n  attachable-volumes-aws-ebs:  25\n  cpu:                         2\n  ephemeral-storage:           46343520Ki\n  hugepages-1Gi:               0\n  hugepages-2Mi:               0\n  memory:                      3966528Ki\n  pods:                        110\nAllocatable:\n  attachable-volumes-aws-ebs:  25\n  cpu:                         2\n  ephemeral-storage:           42710187962\n  hugepages-1Gi:               0\n  hugepages-2Mi:               0\n  memory:                      3864128Ki\n  pods:                        110\nSystem Info:\n  Machine ID:                 ec211721c19a6522a00296d52b725cbb\n  System UUID:                ec211721-c19a-6522-a002-96d52b725cbb\n  Boot ID:                    0f938b1e-a6e4-4703-823e-56774e82ed59\n  Kernel Version:             5.10.67-flatcar\n  OS Image:                   Flatcar Container Linux by Kinvolk 2905.2.4 (Oklo)\n  Operating System:           linux\n  Architecture:               amd64\n  Container Runtime Version:  containerd://1.5.4\n  Kubelet Version:            v1.21.5\n  Kube-Proxy Version:         v1.21.5\nPodCIDR:                      100.96.1.0/24\nPodCIDRs:                     100.96.1.0/24\nProviderID:                   aws:///us-east-1a/i-03f17841d09a5163a\nNon-terminated Pods:          (18 in total)\n  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age\n  ---------                   ----                                                       
------------  ----------  ---------------  -------------  ---
  conntrack-1416              pod-client                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
  container-probe-6925        startup-75572a24-0881-46e5-a391-bd0341db5b6a               0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
  kube-system                 coredns-5dc785954d-ts5s8                                   100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m36s
  kube-system                 coredns-autoscaler-84d4cfd89c-7ghkz                        20m (1%)      0 (0%)      10Mi (0%)        0 (0%)         5m36s
  kube-system                 kopeio-networking-agent-k77gc                              50m (2%)      0 (0%)      100Mi (2%)       100Mi (2%)     4m24s
  kube-system                 kube-proxy-ip-172-20-36-158.ec2.internal                   100m (5%)    0 (0%)      0 (0%)           0 (0%)         4m22s
  kubectl-6245                agnhost-replica-6bcf79b489-wbg4d                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         33s
  kubectl-6245                frontend-685fc574d5-qlj68                                  100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         34s
  kubectl-8541                agnhost-primary-qtnch                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
  kubelet-test-5866           busybox-readonly-fs47d4716c-908c-430b-b32a-e4bacaf3f412    0 (0%)        0 (0%)      0 (0%)           0 (0%)         21s
  nettest-7029                host-test-container-pod                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
  nettest-7029                netserver-0                                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
  nettest-7029                test-container-pod                                         0 (0%)      0 (0%)           0 (0%)         11s
  pod-network-test-8876       netserver-0                                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
  pods-5830                   pod-submit-status-0-3                                      5m (0%)       0 (0%)      10Mi (0%)        0 (0%)         11s
  prestop-5063                tester                                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         17s
  webhook-2211                sample-webhook-deployment-78988fc6cd-bvvvz                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
  webhook-3475                sample-webhook-deployment-78988fc6cd-sq6xr                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource                    Requests     Limits
  --------                    --------     ------
  cpu                         475m (23%)   0 (0%)
  memory                      390Mi (10%)  270Mi (7%)
  ephemeral-storage           0 (0%)       0 (0%)
  hugepages-1Gi               0 (0%)       0 (0%)
  hugepages-2Mi               0 (0%)       0 (0%)
  attachable-volumes-aws-ebs  0            0
Events:
  Type     Reason                   Age                    From        Message
  ----     ------                   ----                   ----        -------
  Normal   Starting                 4m24s                  kubelet     Starting kubelet.
  Warning  InvalidDiskCapacity      4m24s                  kubelet     invalid capacity 0 on image filesystem
  Normal   NodeHasSufficientMemory  4m24s (x4 over 4m24s)  kubelet     Node ip-172-20-36-158.ec2.internal status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure    4m24s (x4 over 4m24s)  kubelet     Node ip-172-20-36-158.ec2.internal status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     4m24s (x4 over 4m24s)  kubelet     Node ip-172-20-36-158.ec2.internal status is now: NodeHasSufficientPID
  Normal   NodeAllocatableEnforced  4m24s                  kubelet     Updated Node Allocatable limit across pods
  Normal   Starting                 4m23s                  kube-proxy  Starting kube-proxy.
  Normal   NodeReady                4m14s                  kubelet     Node ip-172-20-36-158.ec2.internal status is now: NodeReady
... skipping 11 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl describe
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1084
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":-1,"completed":3,"skipped":6,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:24:49.011: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 82 lines ...
• [SLOW TEST:53.364 seconds]
[sig-network] Conntrack
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to preserve UDP traffic when server pod cycles for a ClusterIP service
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:203
------------------------------
{"msg":"PASSED [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a ClusterIP service","total":-1,"completed":1,"skipped":2,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:24:54.593: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 109 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":1,"skipped":1,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:24:56.810: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 51 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:452
    that expects NO client request
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:462
      should support a client that connects, sends DATA, and disconnects
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:463
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects NO client request should support a client that connects, sends DATA, and disconnects","total":-1,"completed":3,"skipped":22,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:24:56.928: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 46 lines ...
• [SLOW TEST:16.856 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":-1,"completed":4,"skipped":9,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:25:05.896: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 150 lines ...
STEP: Registering the webhook via the AdmissionRegistration API
Sep 28 19:24:20.165: INFO: Waiting for webhook configuration to be ready...
Sep 28 19:24:30.341: INFO: Waiting for webhook configuration to be ready...
Sep 28 19:24:40.440: INFO: Waiting for webhook configuration to be ready...
Sep 28 19:24:50.538: INFO: Waiting for webhook configuration to be ready...
Sep 28 19:25:00.611: INFO: Waiting for webhook configuration to be ready...
Sep 28 19:25:00.612: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.errorString | 0xc000336240>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 442 lines ...
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny attaching pod [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Sep 28 19:25:00.612: waiting for webhook configuration to be ready
  Unexpected error:
      <*errors.errorString | 0xc000336240>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:961
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":-1,"completed":1,"skipped":14,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:25:06.729: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 107 lines ...
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Sep 28 19:24:26.989: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing mutating webhooks should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Listing all of the created validation webhooks
Sep 28 19:25:01.462: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.StatusError | 0xc000424320>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
... skipping 453 lines ...
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing mutating webhooks should work [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Sep 28 19:25:01.462: waiting for webhook configuration to be ready
  Unexpected error:
      <*errors.StatusError | 0xc000424320>: {
          ErrStatus: {
              TypeMeta: {Kind: "", APIVersion: ""},
              ListMeta: {
                  SelfLink: "",
                  ResourceVersion: "",
... skipping 9 lines ...
      }
      Timeout: request did not complete within requested timeout context deadline exceeded
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:680
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":-1,"completed":2,"skipped":6,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]"]}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
... skipping 54 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":3,"skipped":22,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:25:07.416: INFO: Only supported for providers [gce gke] (not aws)
... skipping 89 lines ...
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4602-crds.webhook.example.com via the AdmissionRegistration API
Sep 28 19:24:21.440: INFO: Waiting for webhook configuration to be ready...
Sep 28 19:24:31.619: INFO: Waiting for webhook configuration to be ready...
Sep 28 19:24:41.724: INFO: Waiting for webhook configuration to be ready...
Sep 28 19:24:51.823: INFO: Waiting for webhook configuration to be ready...
Sep 28 19:25:01.921: INFO: Waiting for webhook configuration to be ready...
Sep 28 19:25:01.922: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.errorString | 0xc0002b6240>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 438 lines ...
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Sep 28 19:25:01.922: waiting for webhook configuration to be ready
  Unexpected error:
      <*errors.errorString | 0xc0002b6240>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1826
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":-1,"completed":0,"skipped":11,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:25:07.539: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 2 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: emptydir]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver emptydir doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 48 lines ...
Sep 28 19:24:55.646: INFO: PersistentVolumeClaim pvc-54z9n found but phase is Pending instead of Bound.
Sep 28 19:24:57.684: INFO: PersistentVolumeClaim pvc-54z9n found and phase=Bound (6.148876524s)
Sep 28 19:24:57.684: INFO: Waiting up to 3m0s for PersistentVolume local-jwxzv to have phase Bound
Sep 28 19:24:57.720: INFO: PersistentVolume local-jwxzv found and phase=Bound (36.31377ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-wccl
STEP: Creating a pod to test subpath
Sep 28 19:24:57.833: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-wccl" in namespace "provisioning-9942" to be "Succeeded or Failed"
Sep 28 19:24:57.870: INFO: Pod "pod-subpath-test-preprovisionedpv-wccl": Phase="Pending", Reason="", readiness=false. Elapsed: 37.685629ms
Sep 28 19:24:59.909: INFO: Pod "pod-subpath-test-preprovisionedpv-wccl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07627958s
Sep 28 19:25:01.948: INFO: Pod "pod-subpath-test-preprovisionedpv-wccl": Phase="Pending", Reason="", readiness=false. Elapsed: 4.11560756s
Sep 28 19:25:03.987: INFO: Pod "pod-subpath-test-preprovisionedpv-wccl": Phase="Pending", Reason="", readiness=false. Elapsed: 6.153850249s
Sep 28 19:25:06.031: INFO: Pod "pod-subpath-test-preprovisionedpv-wccl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.197924386s
STEP: Saw pod success
Sep 28 19:25:06.031: INFO: Pod "pod-subpath-test-preprovisionedpv-wccl" satisfied condition "Succeeded or Failed"
Sep 28 19:25:06.067: INFO: Trying to get logs from node ip-172-20-61-119.ec2.internal pod pod-subpath-test-preprovisionedpv-wccl container test-container-subpath-preprovisionedpv-wccl: <nil>
STEP: delete the pod
Sep 28 19:25:06.147: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-wccl to disappear
Sep 28 19:25:06.184: INFO: Pod pod-subpath-test-preprovisionedpv-wccl no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-wccl
Sep 28 19:25:06.185: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-wccl" in namespace "provisioning-9942"
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:384
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":4,"skipped":14,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:25:07.769: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 20 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:91
STEP: Creating a pod to test downward API volume plugin
Sep 28 19:25:08.005: INFO: Waiting up to 5m0s for pod "metadata-volume-7457165b-a922-4fac-b681-9054d8b62cd7" in namespace "downward-api-9948" to be "Succeeded or Failed"
Sep 28 19:25:08.042: INFO: Pod "metadata-volume-7457165b-a922-4fac-b681-9054d8b62cd7": Phase="Pending", Reason="", readiness=false. Elapsed: 37.187968ms
Sep 28 19:25:10.079: INFO: Pod "metadata-volume-7457165b-a922-4fac-b681-9054d8b62cd7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.074455281s
STEP: Saw pod success
Sep 28 19:25:10.079: INFO: Pod "metadata-volume-7457165b-a922-4fac-b681-9054d8b62cd7" satisfied condition "Succeeded or Failed"
Sep 28 19:25:10.115: INFO: Trying to get logs from node ip-172-20-50-189.ec2.internal pod metadata-volume-7457165b-a922-4fac-b681-9054d8b62cd7 container client-container: <nil>
STEP: delete the pod
Sep 28 19:25:10.198: INFO: Waiting for pod metadata-volume-7457165b-a922-4fac-b681-9054d8b62cd7 to disappear
Sep 28 19:25:10.234: INFO: Pod metadata-volume-7457165b-a922-4fac-b681-9054d8b62cd7 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 28 19:25:10.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9948" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":5,"skipped":15,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 49 lines ...
Sep 28 19:24:21.511: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-b94nk] to have phase Bound
Sep 28 19:24:21.548: INFO: PersistentVolumeClaim pvc-b94nk found and phase=Bound (37.080576ms)
STEP: Deleting the previously created pod
Sep 28 19:24:39.737: INFO: Deleting pod "pvc-volume-tester-g7dpm" in namespace "csi-mock-volumes-811"
Sep 28 19:24:39.779: INFO: Wait up to 5m0s for pod "pvc-volume-tester-g7dpm" to be fully deleted
STEP: Checking CSI driver logs
Sep 28 19:24:51.902: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/bb49bfac-f774-44e5-a202-93fad3e708f0/volumes/kubernetes.io~csi/pvc-32d92ea7-5f41-4933-ac68-d877103f3f7b/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-g7dpm
Sep 28 19:24:51.903: INFO: Deleting pod "pvc-volume-tester-g7dpm" in namespace "csi-mock-volumes-811"
STEP: Deleting claim pvc-b94nk
Sep 28 19:24:52.016: INFO: Waiting up to 2m0s for PersistentVolume pvc-32d92ea7-5f41-4933-ac68-d877103f3f7b to get deleted
Sep 28 19:24:52.053: INFO: PersistentVolume pvc-32d92ea7-5f41-4933-ac68-d877103f3f7b was removed
STEP: Deleting storageclass csi-mock-volumes-811-scdgtkt
... skipping 44 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI workload information using mock driver
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:443
    should not be passed when podInfoOnMount=nil
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=nil","total":-1,"completed":1,"skipped":1,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 19 lines ...
STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
Sep 28 19:24:24.448: INFO: Waiting for webhook configuration to be ready...
Sep 28 19:24:34.629: INFO: Waiting for webhook configuration to be ready...
Sep 28 19:24:44.730: INFO: Waiting for webhook configuration to be ready...
Sep 28 19:24:54.829: INFO: Waiting for webhook configuration to be ready...
Sep 28 19:25:04.909: INFO: Waiting for webhook configuration to be ready...
Sep 28 19:25:04.909: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.errorString | 0xc000244250>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 426 lines ...
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Sep 28 19:25:04.909: waiting for webhook configuration to be ready
  Unexpected error:
      <*errors.errorString | 0xc000244250>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1361
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":-1,"completed":0,"skipped":11,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]"]}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:25:12.676: INFO: Only supported for providers [vsphere] (not aws)
... skipping 133 lines ...
Sep 28 19:24:38.914: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-5jdxx] to have phase Bound
Sep 28 19:24:38.951: INFO: PersistentVolumeClaim pvc-5jdxx found and phase=Bound (36.079877ms)
STEP: Deleting the previously created pod
Sep 28 19:24:51.138: INFO: Deleting pod "pvc-volume-tester-592dj" in namespace "csi-mock-volumes-6545"
Sep 28 19:24:51.177: INFO: Wait up to 5m0s for pod "pvc-volume-tester-592dj" to be fully deleted
STEP: Checking CSI driver logs
Sep 28 19:24:57.293: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/f75f81d7-fbe6-4548-b841-13c27b9267b4/volumes/kubernetes.io~csi/pvc-4ff3da12-6b78-49f7-9396-3b4d8102447e/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-592dj
Sep 28 19:24:57.293: INFO: Deleting pod "pvc-volume-tester-592dj" in namespace "csi-mock-volumes-6545"
STEP: Deleting claim pvc-5jdxx
Sep 28 19:24:57.400: INFO: Waiting up to 2m0s for PersistentVolume pvc-4ff3da12-6b78-49f7-9396-3b4d8102447e to get deleted
Sep 28 19:24:57.441: INFO: PersistentVolume pvc-4ff3da12-6b78-49f7-9396-3b4d8102447e found and phase=Released (40.725734ms)
Sep 28 19:24:59.476: INFO: PersistentVolume pvc-4ff3da12-6b78-49f7-9396-3b4d8102447e was removed
... skipping 88 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should provision a volume and schedule a pod with AllowedTopologies
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:164
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies","total":-1,"completed":2,"skipped":24,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:25:12.969: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 64 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 28 19:25:07.331: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dacc3027-3c9e-4f0c-b198-b580ce921052" in namespace "projected-5088" to be "Succeeded or Failed"
Sep 28 19:25:07.366: INFO: Pod "downwardapi-volume-dacc3027-3c9e-4f0c-b198-b580ce921052": Phase="Pending", Reason="", readiness=false. Elapsed: 35.176084ms
Sep 28 19:25:09.402: INFO: Pod "downwardapi-volume-dacc3027-3c9e-4f0c-b198-b580ce921052": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071115255s
Sep 28 19:25:11.438: INFO: Pod "downwardapi-volume-dacc3027-3c9e-4f0c-b198-b580ce921052": Phase="Pending", Reason="", readiness=false. Elapsed: 4.107093951s
Sep 28 19:25:13.475: INFO: Pod "downwardapi-volume-dacc3027-3c9e-4f0c-b198-b580ce921052": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.143736311s
STEP: Saw pod success
Sep 28 19:25:13.475: INFO: Pod "downwardapi-volume-dacc3027-3c9e-4f0c-b198-b580ce921052" satisfied condition "Succeeded or Failed"
Sep 28 19:25:13.510: INFO: Trying to get logs from node ip-172-20-36-158.ec2.internal pod downwardapi-volume-dacc3027-3c9e-4f0c-b198-b580ce921052 container client-container: <nil>
STEP: delete the pod
Sep 28 19:25:13.587: INFO: Waiting for pod downwardapi-volume-dacc3027-3c9e-4f0c-b198-b580ce921052 to disappear
Sep 28 19:25:13.622: INFO: Pod downwardapi-volume-dacc3027-3c9e-4f0c-b198-b580ce921052 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.579 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":36,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:25:13.704: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 11 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSIServiceAccountToken token should not be plumbed down when CSIDriver is not deployed","total":-1,"completed":4,"skipped":36,"failed":0}
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 28 19:25:12.845: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename disruption
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 32 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-0c6ec277-62fb-4255-aa42-40a0f284fce4
STEP: Creating a pod to test consume secrets
Sep 28 19:25:10.595: INFO: Waiting up to 5m0s for pod "pod-secrets-efb465ff-b17d-496c-ab55-234312798c42" in namespace "secrets-7457" to be "Succeeded or Failed"
Sep 28 19:25:10.631: INFO: Pod "pod-secrets-efb465ff-b17d-496c-ab55-234312798c42": Phase="Pending", Reason="", readiness=false. Elapsed: 36.272991ms
Sep 28 19:25:12.669: INFO: Pod "pod-secrets-efb465ff-b17d-496c-ab55-234312798c42": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073819716s
Sep 28 19:25:14.706: INFO: Pod "pod-secrets-efb465ff-b17d-496c-ab55-234312798c42": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.110722239s
STEP: Saw pod success
Sep 28 19:25:14.706: INFO: Pod "pod-secrets-efb465ff-b17d-496c-ab55-234312798c42" satisfied condition "Succeeded or Failed"
Sep 28 19:25:14.742: INFO: Trying to get logs from node ip-172-20-62-211.ec2.internal pod pod-secrets-efb465ff-b17d-496c-ab55-234312798c42 container secret-volume-test: <nil>
STEP: delete the pod
Sep 28 19:25:14.821: INFO: Waiting for pod pod-secrets-efb465ff-b17d-496c-ab55-234312798c42 to disappear
Sep 28 19:25:14.858: INFO: Pod pod-secrets-efb465ff-b17d-496c-ab55-234312798c42 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 28 19:25:14.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7457" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":18,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:25:14.943: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 55 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:208

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces should list and delete a collection of PodDisruptionBudgets [Conformance]","total":-1,"completed":5,"skipped":36,"failed":0}
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 28 19:25:13.727: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:75
STEP: Creating configMap with name projected-configmap-test-volume-a5624b35-bbbb-4e66-91a0-6186680c8fa3
STEP: Creating a pod to test consume configMaps
Sep 28 19:25:14.007: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f94cb72b-5f84-4692-9c1d-57131ce9bb2a" in namespace "projected-6995" to be "Succeeded or Failed"
Sep 28 19:25:14.042: INFO: Pod "pod-projected-configmaps-f94cb72b-5f84-4692-9c1d-57131ce9bb2a": Phase="Pending", Reason="", readiness=false. Elapsed: 35.234376ms
Sep 28 19:25:16.079: INFO: Pod "pod-projected-configmaps-f94cb72b-5f84-4692-9c1d-57131ce9bb2a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.07242088s
STEP: Saw pod success
Sep 28 19:25:16.079: INFO: Pod "pod-projected-configmaps-f94cb72b-5f84-4692-9c1d-57131ce9bb2a" satisfied condition "Succeeded or Failed"
Sep 28 19:25:16.115: INFO: Trying to get logs from node ip-172-20-50-189.ec2.internal pod pod-projected-configmaps-f94cb72b-5f84-4692-9c1d-57131ce9bb2a container agnhost-container: <nil>
STEP: delete the pod
Sep 28 19:25:16.193: INFO: Waiting for pod pod-projected-configmaps-f94cb72b-5f84-4692-9c1d-57131ce9bb2a to disappear
Sep 28 19:25:16.228: INFO: Pod pod-projected-configmaps-f94cb72b-5f84-4692-9c1d-57131ce9bb2a no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 28 19:25:16.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6995" for this suite.

•
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":-1,"completed":5,"skipped":25,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 28 19:25:06.470: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 62 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":6,"skipped":25,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:25:18.298: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 42 lines ...
• [SLOW TEST:8.584 seconds]
[sig-apps] DisruptionController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should observe PodDisruptionBudget status updated [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should observe PodDisruptionBudget status updated [Conformance]","total":-1,"completed":3,"skipped":32,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:25:21.620: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 37 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":6,"skipped":36,"failed":0}
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 28 19:25:16.313: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 29 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl client-side validation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:982
    should create/apply a CR with unknown fields for CRD with no validation schema
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:983
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl client-side validation should create/apply a CR with unknown fields for CRD with no validation schema","total":-1,"completed":7,"skipped":36,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:25:28.575: INFO: Only supported for providers [gce gke] (not aws)
... skipping 64 lines ...
• [SLOW TEST:37.817 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if deleteOptions.OrphanDependents is nil
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/garbage_collector.go:454
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if deleteOptions.OrphanDependents is nil","total":-1,"completed":2,"skipped":3,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 4 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating pod pod-subpath-test-downwardapi-wrq8
STEP: Creating a pod to test atomic-volume-subpath
Sep 28 19:25:07.627: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-wrq8" in namespace "subpath-1137" to be "Succeeded or Failed"
Sep 28 19:25:07.663: INFO: Pod "pod-subpath-test-downwardapi-wrq8": Phase="Pending", Reason="", readiness=false. Elapsed: 35.640526ms
Sep 28 19:25:09.705: INFO: Pod "pod-subpath-test-downwardapi-wrq8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077643219s
Sep 28 19:25:11.742: INFO: Pod "pod-subpath-test-downwardapi-wrq8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.115312951s
Sep 28 19:25:13.780: INFO: Pod "pod-subpath-test-downwardapi-wrq8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.153155148s
Sep 28 19:25:15.817: INFO: Pod "pod-subpath-test-downwardapi-wrq8": Phase="Running", Reason="", readiness=true. Elapsed: 8.19011239s
Sep 28 19:25:17.859: INFO: Pod "pod-subpath-test-downwardapi-wrq8": Phase="Running", Reason="", readiness=true. Elapsed: 10.232022568s
... skipping 5 lines ...
Sep 28 19:25:30.082: INFO: Pod "pod-subpath-test-downwardapi-wrq8": Phase="Running", Reason="", readiness=true. Elapsed: 22.455317306s
Sep 28 19:25:32.121: INFO: Pod "pod-subpath-test-downwardapi-wrq8": Phase="Running", Reason="", readiness=true. Elapsed: 24.49349082s
Sep 28 19:25:34.157: INFO: Pod "pod-subpath-test-downwardapi-wrq8": Phase="Running", Reason="", readiness=true. Elapsed: 26.530151403s
Sep 28 19:25:36.194: INFO: Pod "pod-subpath-test-downwardapi-wrq8": Phase="Running", Reason="", readiness=true. Elapsed: 28.567206329s
Sep 28 19:25:38.231: INFO: Pod "pod-subpath-test-downwardapi-wrq8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.604073623s
STEP: Saw pod success
Sep 28 19:25:38.231: INFO: Pod "pod-subpath-test-downwardapi-wrq8" satisfied condition "Succeeded or Failed"
Sep 28 19:25:38.267: INFO: Trying to get logs from node ip-172-20-36-158.ec2.internal pod pod-subpath-test-downwardapi-wrq8 container test-container-subpath-downwardapi-wrq8: <nil>
STEP: delete the pod
Sep 28 19:25:38.344: INFO: Waiting for pod pod-subpath-test-downwardapi-wrq8 to disappear
Sep 28 19:25:38.379: INFO: Pod pod-subpath-test-downwardapi-wrq8 no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-wrq8
Sep 28 19:25:38.379: INFO: Deleting pod "pod-subpath-test-downwardapi-wrq8" in namespace "subpath-1137"
... skipping 8 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":-1,"completed":3,"skipped":8,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:25:38.504: INFO: Driver "csi-hostpath" does not support FsGroup - skipping
... skipping 42 lines ...
• [SLOW TEST:77.885 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be restarted startup probe fails
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:313
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted startup probe fails","total":-1,"completed":3,"skipped":23,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:25:39.559: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 41 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  When pod refers to non-existent ephemeral storage
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:53
    should allow deletion of pod with invalid volume : projected
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55
------------------------------
{"msg":"PASSED [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : projected","total":-1,"completed":2,"skipped":2,"failed":0}

S
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 32 lines ...
• [SLOW TEST:27.595 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":-1,"completed":3,"skipped":41,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]}

SSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 17 lines ...
STEP: Registering the webhook via the AdmissionRegistration API
Sep 28 19:24:59.298: INFO: Waiting for webhook configuration to be ready...
Sep 28 19:25:09.479: INFO: Waiting for webhook configuration to be ready...
Sep 28 19:25:19.591: INFO: Waiting for webhook configuration to be ready...
Sep 28 19:25:29.723: INFO: Waiting for webhook configuration to be ready...
Sep 28 19:25:39.823: INFO: Waiting for webhook configuration to be ready...
Sep 28 19:25:39.823: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.errorString | 0xc0002b4240>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 586 lines ...
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny pod and configmap creation [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Sep 28 19:25:39.824: waiting for webhook configuration to be ready
  Unexpected error:
      <*errors.errorString | 0xc0002b4240>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:909
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":-1,"completed":1,"skipped":31,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]"]}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:25:46.801: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 36 lines ...
• [SLOW TEST:60.338 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":24,"failed":0}

SSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50
[It] volume on tmpfs should have the correct mode using FSGroup
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:75
STEP: Creating a pod to test emptydir volume type on tmpfs
Sep 28 19:25:38.740: INFO: Waiting up to 5m0s for pod "pod-effeafba-b01a-4db0-ab82-3657bbae39e2" in namespace "emptydir-7013" to be "Succeeded or Failed"
Sep 28 19:25:38.776: INFO: Pod "pod-effeafba-b01a-4db0-ab82-3657bbae39e2": Phase="Pending", Reason="", readiness=false. Elapsed: 35.670715ms
Sep 28 19:25:40.812: INFO: Pod "pod-effeafba-b01a-4db0-ab82-3657bbae39e2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071853791s
Sep 28 19:25:42.849: INFO: Pod "pod-effeafba-b01a-4db0-ab82-3657bbae39e2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.109266981s
Sep 28 19:25:44.886: INFO: Pod "pod-effeafba-b01a-4db0-ab82-3657bbae39e2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.14627615s
Sep 28 19:25:46.923: INFO: Pod "pod-effeafba-b01a-4db0-ab82-3657bbae39e2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.183225948s
STEP: Saw pod success
Sep 28 19:25:46.923: INFO: Pod "pod-effeafba-b01a-4db0-ab82-3657bbae39e2" satisfied condition "Succeeded or Failed"
Sep 28 19:25:46.959: INFO: Trying to get logs from node ip-172-20-62-211.ec2.internal pod pod-effeafba-b01a-4db0-ab82-3657bbae39e2 container test-container: <nil>
STEP: delete the pod
Sep 28 19:25:47.035: INFO: Waiting for pod pod-effeafba-b01a-4db0-ab82-3657bbae39e2 to disappear
Sep 28 19:25:47.071: INFO: Pod pod-effeafba-b01a-4db0-ab82-3657bbae39e2 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 6 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:48
    volume on tmpfs should have the correct mode using FSGroup
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:75
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on tmpfs should have the correct mode using FSGroup","total":-1,"completed":4,"skipped":13,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]"]}

SSSSSSSS
------------------------------
[BeforeEach] [sig-node] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 20 lines ...
Sep 28 19:25:41.263: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test override arguments
Sep 28 19:25:41.492: INFO: Waiting up to 5m0s for pod "client-containers-b7cd65ef-34c4-4922-b666-c9ad24e51bf7" in namespace "containers-6706" to be "Succeeded or Failed"
Sep 28 19:25:41.528: INFO: Pod "client-containers-b7cd65ef-34c4-4922-b666-c9ad24e51bf7": Phase="Pending", Reason="", readiness=false. Elapsed: 36.701615ms
Sep 28 19:25:43.565: INFO: Pod "client-containers-b7cd65ef-34c4-4922-b666-c9ad24e51bf7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073773842s
Sep 28 19:25:45.603: INFO: Pod "client-containers-b7cd65ef-34c4-4922-b666-c9ad24e51bf7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.111482161s
Sep 28 19:25:47.640: INFO: Pod "client-containers-b7cd65ef-34c4-4922-b666-c9ad24e51bf7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.14815602s
STEP: Saw pod success
Sep 28 19:25:47.640: INFO: Pod "client-containers-b7cd65ef-34c4-4922-b666-c9ad24e51bf7" satisfied condition "Succeeded or Failed"
Sep 28 19:25:47.676: INFO: Trying to get logs from node ip-172-20-36-158.ec2.internal pod client-containers-b7cd65ef-34c4-4922-b666-c9ad24e51bf7 container agnhost-container: <nil>
STEP: delete the pod
Sep 28 19:25:47.758: INFO: Waiting for pod client-containers-b7cd65ef-34c4-4922-b666-c9ad24e51bf7 to disappear
Sep 28 19:25:47.794: INFO: Pod client-containers-b7cd65ef-34c4-4922-b666-c9ad24e51bf7 no longer exists
[AfterEach] [sig-node] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 124 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI Volume expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:561
    should expand volume by restarting pod if attach=on, nodeExpansion=on
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:590
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should expand volume by restarting pod if attach=on, nodeExpansion=on","total":-1,"completed":1,"skipped":9,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 21 lines ...
• [SLOW TEST:8.478 seconds]
[sig-apps] ReplicationController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":-1,"completed":5,"skipped":21,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]"]}

SSS
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 111 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI Volume expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:561
    should expand volume by restarting pod if attach=off, nodeExpansion=on
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:590
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should expand volume by restarting pod if attach=off, nodeExpansion=on","total":-1,"completed":4,"skipped":25,"failed":0}

S
------------------------------
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 18 lines ...
• [SLOW TEST:50.718 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":-1,"completed":4,"skipped":34,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:25:58.195: INFO: Only supported for providers [vsphere] (not aws)
... skipping 61 lines ...
Sep 28 19:24:45.803: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-7834
Sep 28 19:24:45.839: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-7834
Sep 28 19:24:45.875: INFO: creating *v1.StatefulSet: csi-mock-volumes-7834-6978/csi-mockplugin
Sep 28 19:24:45.912: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-7834
Sep 28 19:24:45.952: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-7834"
Sep 28 19:24:45.987: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-7834 to register on node ip-172-20-61-119.ec2.internal
I0928 19:24:48.343324    5396 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null}
I0928 19:24:48.378455    5396 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-7834","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null}
I0928 19:24:48.413643    5396 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}},{"Type":{"Service":{"type":2}}}]},"Error":"","FullError":null}
I0928 19:24:48.449652    5396 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null}
I0928 19:24:48.558239    5396 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-7834","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null}
I0928 19:24:49.413864    5396 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-7834","accessible_topology":{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}},"Error":"","FullError":null}
STEP: Creating pod
Sep 28 19:24:51.177: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
I0928 19:24:51.266101    5396 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-1591cb22-1ac1-431a-bb6d-a3a63444935e","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}],"accessibility_requirements":{"requisite":[{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}],"preferred":[{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}]}},"Response":null,"Error":"rpc error: code = ResourceExhausted desc = fake error","FullError":{"code":8,"message":"fake error"}}
I0928 19:24:53.445780    5396 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-1591cb22-1ac1-431a-bb6d-a3a63444935e","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}],"accessibility_requirements":{"requisite":[{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}],"preferred":[{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}]}},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-1591cb22-1ac1-431a-bb6d-a3a63444935e"},"accessible_topology":[{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}]}},"Error":"","FullError":null}
I0928 19:24:54.648333    5396 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
Sep 28 19:24:54.687: INFO: >>> kubeConfig: /root/.kube/config
I0928 19:24:54.993108    5396 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-1591cb22-1ac1-431a-bb6d-a3a63444935e/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-1591cb22-1ac1-431a-bb6d-a3a63444935e","storage.kubernetes.io/csiProvisionerIdentity":"1632857088475-8081-csi-mock-csi-mock-volumes-7834"}},"Response":{},"Error":"","FullError":null}
I0928 19:24:55.034490    5396 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
Sep 28 19:24:55.073: INFO: >>> kubeConfig: /root/.kube/config
Sep 28 19:24:55.454: INFO: >>> kubeConfig: /root/.kube/config
Sep 28 19:24:55.747: INFO: >>> kubeConfig: /root/.kube/config
I0928 19:24:56.062310    5396 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-1591cb22-1ac1-431a-bb6d-a3a63444935e/globalmount","target_path":"/var/lib/kubelet/pods/caa15657-616e-4301-bca0-6ae8b2ca3dcc/volumes/kubernetes.io~csi/pvc-1591cb22-1ac1-431a-bb6d-a3a63444935e/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-1591cb22-1ac1-431a-bb6d-a3a63444935e","storage.kubernetes.io/csiProvisionerIdentity":"1632857088475-8081-csi-mock-csi-mock-volumes-7834"}},"Response":{},"Error":"","FullError":null}
Sep 28 19:24:57.325: INFO: Deleting pod "pvc-volume-tester-6ddt2" in namespace "csi-mock-volumes-7834"
Sep 28 19:24:57.363: INFO: Wait up to 5m0s for pod "pvc-volume-tester-6ddt2" to be fully deleted
Sep 28 19:25:00.178: INFO: >>> kubeConfig: /root/.kube/config
I0928 19:25:00.568253    5396 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/caa15657-616e-4301-bca0-6ae8b2ca3dcc/volumes/kubernetes.io~csi/pvc-1591cb22-1ac1-431a-bb6d-a3a63444935e/mount"},"Response":{},"Error":"","FullError":null}
I0928 19:25:00.646788    5396 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I0928 19:25:00.688953    5396 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-1591cb22-1ac1-431a-bb6d-a3a63444935e/globalmount"},"Response":{},"Error":"","FullError":null}
I0928 19:25:03.494157    5396 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null}
STEP: Checking PVC events
Sep 28 19:25:04.476: INFO: PVC event ADDED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-zskkz", GenerateName:"pvc-", Namespace:"csi-mock-volumes-7834", SelfLink:"", UID:"1591cb22-1ac1-431a-bb6d-a3a63444935e", ResourceVersion:"3453", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63768453891, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0029b2870), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0029b2888)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc0029c0770), VolumeMode:(*v1.PersistentVolumeMode)(0xc0029c0780), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Sep 28 19:25:04.477: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-zskkz", GenerateName:"pvc-", Namespace:"csi-mock-volumes-7834", SelfLink:"", UID:"1591cb22-1ac1-431a-bb6d-a3a63444935e", ResourceVersion:"3456", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63768453891, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.kubernetes.io/selected-node":"ip-172-20-61-119.ec2.internal"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0029b2ab0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0029b2ac8)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0029b2af8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0029b2b10)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc0029c08d0), VolumeMode:(*v1.PersistentVolumeMode)(0xc0029c08e0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Sep 28 19:25:04.477: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-zskkz", GenerateName:"pvc-", Namespace:"csi-mock-volumes-7834", SelfLink:"", UID:"1591cb22-1ac1-431a-bb6d-a3a63444935e", ResourceVersion:"3457", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63768453891, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-7834", "volume.kubernetes.io/selected-node":"ip-172-20-61-119.ec2.internal"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc001adaaf8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001adab10)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc001adab28), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001adab40)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc001adab58), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001adab70)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc001275510), VolumeMode:(*v1.PersistentVolumeMode)(0xc001275520), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Sep 28 19:25:04.477: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-zskkz", GenerateName:"pvc-", Namespace:"csi-mock-volumes-7834", SelfLink:"", UID:"1591cb22-1ac1-431a-bb6d-a3a63444935e", ResourceVersion:"3462", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63768453891, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-7834"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc001adab88), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001adaba0)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc001adabb8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001adabd0)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc001adabe8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001adac00)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc001275570), VolumeMode:(*v1.PersistentVolumeMode)(0xc001275580), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Sep 28 19:25:04.477: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-zskkz", GenerateName:"pvc-", Namespace:"csi-mock-volumes-7834", SelfLink:"", UID:"1591cb22-1ac1-431a-bb6d-a3a63444935e", ResourceVersion:"3515", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63768453891, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-7834", "volume.kubernetes.io/selected-node":"ip-172-20-61-119.ec2.internal"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc001adac30), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001adac48)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc001adac60), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001adac78)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc001adac90), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001adacc0)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc0012755b0), VolumeMode:(*v1.PersistentVolumeMode)(0xc0012755d0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
... skipping 51 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  storage capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:900
    exhausted, late binding, with topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:958
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume storage capacity exhausted, late binding, with topology","total":-1,"completed":4,"skipped":44,"failed":0}
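Editor's note: the gRPCCall entries in the spec above are single-line klog records carrying a JSON payload after the "gRPCCall: " marker, tracing the CSI lifecycle (Probe, GetPluginInfo, CreateVolume retried after the injected ResourceExhausted fault, NodeStageVolume, NodePublishVolume, then teardown). A minimal sketch for extracting the method and error code from such a line; `parse_grpc_call` is a hypothetical helper, not part of the e2e framework:

```python
import json

def parse_grpc_call(log_line):
    """Extract the embedded JSON payload from a klog 'gRPCCall:' line.

    Assumes the format shown in the log above:
    'I0928 ... csi.go:431] gRPCCall: {...}'.
    """
    payload = log_line.split("gRPCCall: ", 1)[1]
    return json.loads(payload)

# Sample line modeled on the failing CreateVolume call above.
line = ('I0928 19:24:51.266101    5396 csi.go:431] gRPCCall: '
        '{"Method":"/csi.v1.Controller/CreateVolume","Request":{},'
        '"Response":null,'
        '"Error":"rpc error: code = ResourceExhausted desc = fake error",'
        '"FullError":{"code":8,"message":"fake error"}}')
call = parse_grpc_call(line)
print(call["Method"], call["FullError"]["code"])
# → /csi.v1.Controller/CreateVolume 8
```

Applied to the first CreateVolume entry above, this yields gRPC code 8 (ResourceExhausted), the fault the mock driver injects before the retry succeeds.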

SSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:26:03.361: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 43 lines ...
Sep 28 19:25:55.431: INFO: PersistentVolumeClaim pvc-k8pnk found but phase is Pending instead of Bound.
Sep 28 19:25:57.470: INFO: PersistentVolumeClaim pvc-k8pnk found and phase=Bound (12.273678202s)
Sep 28 19:25:57.470: INFO: Waiting up to 3m0s for PersistentVolume local-77c6d to have phase Bound
Sep 28 19:25:57.509: INFO: PersistentVolume local-77c6d found and phase=Bound (38.161694ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-vdzz
STEP: Creating a pod to test subpath
Sep 28 19:25:57.625: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-vdzz" in namespace "provisioning-587" to be "Succeeded or Failed"
Sep 28 19:25:57.664: INFO: Pod "pod-subpath-test-preprovisionedpv-vdzz": Phase="Pending", Reason="", readiness=false. Elapsed: 38.976528ms
Sep 28 19:25:59.705: INFO: Pod "pod-subpath-test-preprovisionedpv-vdzz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079352224s
Sep 28 19:26:01.744: INFO: Pod "pod-subpath-test-preprovisionedpv-vdzz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.118689987s
Sep 28 19:26:03.783: INFO: Pod "pod-subpath-test-preprovisionedpv-vdzz": Phase="Pending", Reason="", readiness=false. Elapsed: 6.157443829s
Sep 28 19:26:05.826: INFO: Pod "pod-subpath-test-preprovisionedpv-vdzz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.200555219s
STEP: Saw pod success
Sep 28 19:26:05.826: INFO: Pod "pod-subpath-test-preprovisionedpv-vdzz" satisfied condition "Succeeded or Failed"
Sep 28 19:26:05.867: INFO: Trying to get logs from node ip-172-20-62-211.ec2.internal pod pod-subpath-test-preprovisionedpv-vdzz container test-container-volume-preprovisionedpv-vdzz: <nil>
STEP: delete the pod
Sep 28 19:26:05.951: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-vdzz to disappear
Sep 28 19:26:05.992: INFO: Pod pod-subpath-test-preprovisionedpv-vdzz no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-vdzz
Sep 28 19:26:05.992: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-vdzz" in namespace "provisioning-587"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":3,"skipped":6,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:26:06.634: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 143 lines ...
• [SLOW TEST:9.438 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":-1,"completed":5,"skipped":58,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 17 lines ...
STEP: Registering the crd webhook via the AdmissionRegistration API
Sep 28 19:25:30.257: INFO: Waiting for webhook configuration to be ready...
Sep 28 19:25:40.436: INFO: Waiting for webhook configuration to be ready...
Sep 28 19:25:50.536: INFO: Waiting for webhook configuration to be ready...
Sep 28 19:26:00.636: INFO: Waiting for webhook configuration to be ready...
Sep 28 19:26:10.714: INFO: Waiting for webhook configuration to be ready...
Sep 28 19:26:10.714: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.errorString | 0xc0002b6240>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 490 lines ...
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should deny crd creation [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Sep 28 19:26:10.714: waiting for webhook configuration to be ready
  Unexpected error:
      <*errors.errorString | 0xc0002b6240>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:2059
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":-1,"completed":1,"skipped":6,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}
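Editor's note: the repeated "Waiting for webhook configuration to be ready..." lines followed by "timed out waiting for the condition" are the signature of a poll-until-ready loop hitting its deadline. A minimal sketch of that polling pattern (hypothetical names with an injectable clock and sleep for testability; not the framework's actual wait code):

```python
import time

def wait_for_condition(check, timeout, interval=10.0,
                       sleep=time.sleep, clock=time.monotonic):
    """Poll `check` until it returns True or `timeout` seconds elapse.

    Mirrors the retry loop implied by the log above; `check` stands in
    for the webhook-readiness probe.
    """
    deadline = clock() + timeout
    while clock() < deadline:
        if check():
            return
        sleep(interval)
    raise TimeoutError("timed out waiting for the condition")

# Failure path, driven by a fake clock so it returns immediately:
t = [0.0]
try:
    wait_for_condition(lambda: False, timeout=30, interval=10,
                       sleep=lambda s: t.__setitem__(0, t[0] + s),
                       clock=lambda: t[0])
except TimeoutError as e:
    print(e)  # → timed out waiting for the condition
```

With a 10-second interval and the ~50-second spread of the "Waiting..." timestamps above, the loop evidently made several probes before raising, matching the timeout shown at 19:26:10.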
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:26:15.754: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 9 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet Replace and Patch tests [Conformance]","total":-1,"completed":6,"skipped":24,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]"]}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 28 19:26:10.475: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 118 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:217
Sep 28 19:26:15.992: INFO: Waiting up to 5m0s for pod "busybox-readonly-true-5970b09b-2ebc-4277-b825-10fdfbced95d" in namespace "security-context-test-5168" to be "Succeeded or Failed"
Sep 28 19:26:16.030: INFO: Pod "busybox-readonly-true-5970b09b-2ebc-4277-b825-10fdfbced95d": Phase="Pending", Reason="", readiness=false. Elapsed: 37.839976ms
Sep 28 19:26:18.069: INFO: Pod "busybox-readonly-true-5970b09b-2ebc-4277-b825-10fdfbced95d": Phase="Failed", Reason="", readiness=false. Elapsed: 2.076632271s
Sep 28 19:26:18.069: INFO: Pod "busybox-readonly-true-5970b09b-2ebc-4277-b825-10fdfbced95d" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 28 19:26:18.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-5168" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]","total":-1,"completed":2,"skipped":7,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:26:18.163: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 38 lines ...
Sep 28 19:26:11.058: INFO: PersistentVolumeClaim pvc-ws6sj found but phase is Pending instead of Bound.
Sep 28 19:26:13.096: INFO: PersistentVolumeClaim pvc-ws6sj found and phase=Bound (6.155533183s)
Sep 28 19:26:13.096: INFO: Waiting up to 3m0s for PersistentVolume local-j2gpj to have phase Bound
Sep 28 19:26:13.135: INFO: PersistentVolume local-j2gpj found and phase=Bound (38.113026ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-xckg
STEP: Creating a pod to test subpath
Sep 28 19:26:13.498: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-xckg" in namespace "provisioning-9994" to be "Succeeded or Failed"
Sep 28 19:26:13.536: INFO: Pod "pod-subpath-test-preprovisionedpv-xckg": Phase="Pending", Reason="", readiness=false. Elapsed: 38.61495ms
Sep 28 19:26:15.576: INFO: Pod "pod-subpath-test-preprovisionedpv-xckg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078027843s
Sep 28 19:26:17.615: INFO: Pod "pod-subpath-test-preprovisionedpv-xckg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.11773132s
STEP: Saw pod success
Sep 28 19:26:17.616: INFO: Pod "pod-subpath-test-preprovisionedpv-xckg" satisfied condition "Succeeded or Failed"
Sep 28 19:26:17.654: INFO: Trying to get logs from node ip-172-20-36-158.ec2.internal pod pod-subpath-test-preprovisionedpv-xckg container test-container-subpath-preprovisionedpv-xckg: <nil>
STEP: delete the pod
Sep 28 19:26:17.740: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-xckg to disappear
Sep 28 19:26:17.778: INFO: Pod pod-subpath-test-preprovisionedpv-xckg no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-xckg
Sep 28 19:26:17.778: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-xckg" in namespace "provisioning-9994"
... skipping 158 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  Clean up pods on node
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:279
    kubelet should be able to delete 10 pods per node in 1m0s.
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:341
------------------------------
{"msg":"PASSED [sig-node] kubelet Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s.","total":-1,"completed":8,"skipped":43,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":7,"failed":0}
[BeforeEach] [sig-node] Pods Extended
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 28 19:24:08.862: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 103 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  Pod Container Status
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:200
    should never report success for a pending container
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:206
------------------------------
{"msg":"PASSED [sig-node] Pods Extended Pod Container Status should never report success for a pending container","total":-1,"completed":2,"skipped":7,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:26:21.281: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 11 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-node] Secrets should patch a secret [Conformance]","total":-1,"completed":2,"skipped":37,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]"]}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 28 19:25:47.339: INFO: >>> kubeConfig: /root/.kube/config
... skipping 13 lines ...
Sep 28 19:26:10.168: INFO: PersistentVolumeClaim pvc-qk7qq found but phase is Pending instead of Bound.
Sep 28 19:26:12.208: INFO: PersistentVolumeClaim pvc-qk7qq found and phase=Bound (4.117526193s)
Sep 28 19:26:12.208: INFO: Waiting up to 3m0s for PersistentVolume local-zjzrp to have phase Bound
Sep 28 19:26:12.246: INFO: PersistentVolume local-zjzrp found and phase=Bound (38.025251ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-vdbs
STEP: Creating a pod to test subpath
Sep 28 19:26:12.362: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-vdbs" in namespace "provisioning-5014" to be "Succeeded or Failed"
Sep 28 19:26:12.403: INFO: Pod "pod-subpath-test-preprovisionedpv-vdbs": Phase="Pending", Reason="", readiness=false. Elapsed: 41.713235ms
Sep 28 19:26:14.443: INFO: Pod "pod-subpath-test-preprovisionedpv-vdbs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081082585s
Sep 28 19:26:16.482: INFO: Pod "pod-subpath-test-preprovisionedpv-vdbs": Phase="Pending", Reason="", readiness=false. Elapsed: 4.120156814s
Sep 28 19:26:18.520: INFO: Pod "pod-subpath-test-preprovisionedpv-vdbs": Phase="Pending", Reason="", readiness=false. Elapsed: 6.158894786s
Sep 28 19:26:20.560: INFO: Pod "pod-subpath-test-preprovisionedpv-vdbs": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.198364583s
STEP: Saw pod success
Sep 28 19:26:20.560: INFO: Pod "pod-subpath-test-preprovisionedpv-vdbs" satisfied condition "Succeeded or Failed"
Sep 28 19:26:20.599: INFO: Trying to get logs from node ip-172-20-50-189.ec2.internal pod pod-subpath-test-preprovisionedpv-vdbs container test-container-volume-preprovisionedpv-vdbs: <nil>
STEP: delete the pod
Sep 28 19:26:20.684: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-vdbs to disappear
Sep 28 19:26:20.733: INFO: Pod pod-subpath-test-preprovisionedpv-vdbs no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-vdbs
Sep 28 19:26:20.733: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-vdbs" in namespace "provisioning-5014"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":3,"skipped":37,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:26:21.308: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 23 lines ...
[It] should call prestop when killing a pod  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating server pod server in namespace prestop-5063
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-5063
STEP: Deleting pre-stop pod
STEP: Error validating prestop: the server is currently unable to handle the request (get pods server)
STEP: Error validating prestop: the server is currently unable to handle the request (get pods server)
STEP: Error validating prestop: the server is currently unable to handle the request (get pods server)
Sep 28 19:26:19.515: FAIL: validating pre-stop.
Unexpected error:
    <*errors.errorString | 0xc0002c6240>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 21 lines ...
Sep 28 19:26:19.600: INFO: At 2021-09-28 19:24:25 +0000 UTC - event for server: {kubelet ip-172-20-50-189.ec2.internal} Started: Started container agnhost-container
Sep 28 19:26:19.600: INFO: At 2021-09-28 19:24:31 +0000 UTC - event for tester: {default-scheduler } Scheduled: Successfully assigned prestop-5063/tester to ip-172-20-36-158.ec2.internal
Sep 28 19:26:19.600: INFO: At 2021-09-28 19:24:32 +0000 UTC - event for tester: {kubelet ip-172-20-36-158.ec2.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29-1" already present on machine
Sep 28 19:26:19.600: INFO: At 2021-09-28 19:24:32 +0000 UTC - event for tester: {kubelet ip-172-20-36-158.ec2.internal} Created: Created container tester
Sep 28 19:26:19.600: INFO: At 2021-09-28 19:24:32 +0000 UTC - event for tester: {kubelet ip-172-20-36-158.ec2.internal} Started: Started container tester
Sep 28 19:26:19.600: INFO: At 2021-09-28 19:24:39 +0000 UTC - event for tester: {kubelet ip-172-20-36-158.ec2.internal} Killing: Stopping container tester
Sep 28 19:26:19.600: INFO: At 2021-09-28 19:25:11 +0000 UTC - event for tester: {kubelet ip-172-20-36-158.ec2.internal} FailedPreStopHook: Exec lifecycle hook ([wget -O- --post-data={"Source": "prestop"} http://100.96.4.22:8080/write]) for Container "tester" in Pod "tester_prestop-5063(e42d96ed-e6de-421a-ac4d-a49780dffdd3)" failed - error: command 'wget -O- --post-data={"Source": "prestop"} http://100.96.4.22:8080/write' exited with 137: Connecting to 100.96.4.22:8080 (100.96.4.22:8080)
, message: "Connecting to 100.96.4.22:8080 (100.96.4.22:8080)\n"
Sep 28 19:26:19.600: INFO: At 2021-09-28 19:26:19 +0000 UTC - event for server: {kubelet ip-172-20-50-189.ec2.internal} Killing: Stopping container agnhost-container
Sep 28 19:26:19.638: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Sep 28 19:26:19.638: INFO: 
Sep 28 19:26:19.677: INFO: 
Logging node info for node ip-172-20-36-158.ec2.internal
... skipping 210 lines ...
[sig-node] PreStop
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should call prestop when killing a pod  [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Sep 28 19:26:19.515: validating pre-stop.
  Unexpected error:
      <*errors.errorString | 0xc0002c6240>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:151
------------------------------
{"msg":"FAILED [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":-1,"completed":1,"skipped":15,"failed":1,"failures":["[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}

SS
------------------------------
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 13 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 28 19:26:23.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7693" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":38,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]"]}

SS
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 103 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI online volume expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:672
    should expand volume without restarting pod if attach=off, nodeExpansion=on
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:687
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI online volume expansion should expand volume without restarting pod if attach=off, nodeExpansion=on","total":-1,"completed":2,"skipped":25,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:26:24.417: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
... skipping 50 lines ...
• [SLOW TEST:9.704 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields at the schema root [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":-1,"completed":7,"skipped":32,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:26:27.630: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 108 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":3,"skipped":29,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 48 lines ...
Sep 28 19:25:46.592: INFO: PersistentVolumeClaim pvc-cqxqn found and phase=Bound (66.293159ms)
STEP: Deleting the previously created pod
Sep 28 19:26:13.798: INFO: Deleting pod "pvc-volume-tester-zpppn" in namespace "csi-mock-volumes-1950"
Sep 28 19:26:13.842: INFO: Wait up to 5m0s for pod "pvc-volume-tester-zpppn" to be fully deleted
STEP: Checking CSI driver logs
Sep 28 19:26:19.962: INFO: Found volume attribute csi.storage.k8s.io/serviceAccount.tokens: {"":{"token":"eyJhbGciOiJSUzI1NiIsImtpZCI6IllWSE5uRlVvY1p5US1vNzRVOW96Mm1QZWNxMTF0LXl6REpDQmFacjhfRTQifQ.eyJhdWQiOlsia3ViZXJuZXRlcy5zdmMuZGVmYXVsdCJdLCJleHAiOjE2MzI4NTc3NTksImlhdCI6MTYzMjg1NzE1OSwiaXNzIjoiaHR0cHM6Ly9hcGkuaW50ZXJuYWwuZTJlLWIwOGU1MzQzMTgtNjI2OTEudGVzdC1jbmNmLWF3cy5rOHMuaW8iLCJrdWJlcm5ldGVzLmlvIjp7Im5hbWVzcGFjZSI6ImNzaS1tb2NrLXZvbHVtZXMtMTk1MCIsInBvZCI6eyJuYW1lIjoicHZjLXZvbHVtZS10ZXN0ZXItenBwcG4iLCJ1aWQiOiIxN2JmMmRmYi01NzUzLTQ0YjUtOTc5NS1jODk4MGQ0ZTZjMzkifSwic2VydmljZWFjY291bnQiOnsibmFtZSI6ImRlZmF1bHQiLCJ1aWQiOiIwODkyM2JhNy00YmUxLTRkOTctOTBkMy0yNDFjZTZiNjUyNzUifX0sIm5iZiI6MTYzMjg1NzE1OSwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50OmNzaS1tb2NrLXZvbHVtZXMtMTk1MDpkZWZhdWx0In0.C6qzZOYpXdPHjDs9JZ4Xj8qBfEL4PmLB23YiWRNhUYqeeOolNiJrQz6a27cZpee39dl2c9h_z97Gh-8rTWh12GBCMCY06kxshkoU_rPmlx-8NCsGm1NX9stvQqV9W-ZU4qtYlxK9cMpcy4MUSAgrRXQVI4WMwCw_B6q1f-9N7NU7OznaW8cpqpMk-LoGRZtZSCyJMtEWt-vDy6eX_Y17EcA6_m5Jv-LRXQGgT0vLazJPfNMtxLzVdmqa_3RImG-drlJQTQn1DWeQBfseJpzDw9joCSHb7hUEEvvPA42dPiMf_PDKuvH6RDXx8Xa5ld1yaPH96CClSl5yw9jFKZ72ZQ","expirationTimestamp":"2021-09-28T19:35:59Z"}}
Sep 28 19:26:19.962: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/17bf2dfb-5753-44b5-9795-c8980d4e6c39/volumes/kubernetes.io~csi/pvc-d8824bd1-0f4f-4746-9648-baadcb9027d6/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-zpppn
Sep 28 19:26:19.962: INFO: Deleting pod "pvc-volume-tester-zpppn" in namespace "csi-mock-volumes-1950"
STEP: Deleting claim pvc-cqxqn
Sep 28 19:26:20.077: INFO: Waiting up to 2m0s for PersistentVolume pvc-d8824bd1-0f4f-4746-9648-baadcb9027d6 to get deleted
Sep 28 19:26:20.117: INFO: PersistentVolume pvc-d8824bd1-0f4f-4746-9648-baadcb9027d6 found and phase=Released (39.827638ms)
Sep 28 19:26:22.157: INFO: PersistentVolume pvc-d8824bd1-0f4f-4746-9648-baadcb9027d6 found and phase=Released (2.079670461s)
... skipping 87 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    when starting a container that exits
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:42
      should run with the expected status [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":40,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]"]}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:26:44.892: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 104 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:157

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":5,"skipped":40,"failed":0}
[BeforeEach] [sig-storage] PVC Protection
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 28 19:26:18.819: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pvc-protection
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 37 lines ...
• [SLOW TEST:27.200 seconds]
[sig-storage] PVC Protection
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Verify that scheduling of a pod that uses PVC that is being deleted fails and the pod becomes Unschedulable
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:145
------------------------------
{"msg":"PASSED [sig-storage] PVC Protection Verify that scheduling of a pod that uses PVC that is being deleted fails and the pod becomes Unschedulable","total":-1,"completed":6,"skipped":40,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:26:46.033: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 49 lines ...
Sep 28 19:26:22.449: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
STEP: creating a test aws volume
Sep 28 19:26:23.005: INFO: Successfully created a new PD: "aws://us-east-1a/vol-011d05b652a08dd4e".
Sep 28 19:26:23.006: INFO: Creating resource for inline volume
STEP: Creating pod exec-volume-test-inlinevolume-brjv
STEP: Creating a pod to test exec-volume-test
Sep 28 19:26:23.045: INFO: Waiting up to 5m0s for pod "exec-volume-test-inlinevolume-brjv" in namespace "volume-4693" to be "Succeeded or Failed"
Sep 28 19:26:23.083: INFO: Pod "exec-volume-test-inlinevolume-brjv": Phase="Pending", Reason="", readiness=false. Elapsed: 37.474714ms
Sep 28 19:26:25.122: INFO: Pod "exec-volume-test-inlinevolume-brjv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07689625s
Sep 28 19:26:27.162: INFO: Pod "exec-volume-test-inlinevolume-brjv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.116070164s
Sep 28 19:26:29.200: INFO: Pod "exec-volume-test-inlinevolume-brjv": Phase="Pending", Reason="", readiness=false. Elapsed: 6.15477113s
Sep 28 19:26:31.239: INFO: Pod "exec-volume-test-inlinevolume-brjv": Phase="Pending", Reason="", readiness=false. Elapsed: 8.193533964s
Sep 28 19:26:33.279: INFO: Pod "exec-volume-test-inlinevolume-brjv": Phase="Pending", Reason="", readiness=false. Elapsed: 10.233138005s
Sep 28 19:26:35.317: INFO: Pod "exec-volume-test-inlinevolume-brjv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.271784721s
STEP: Saw pod success
Sep 28 19:26:35.317: INFO: Pod "exec-volume-test-inlinevolume-brjv" satisfied condition "Succeeded or Failed"
Sep 28 19:26:35.355: INFO: Trying to get logs from node ip-172-20-62-211.ec2.internal pod exec-volume-test-inlinevolume-brjv container exec-container-inlinevolume-brjv: <nil>
STEP: delete the pod
Sep 28 19:26:35.439: INFO: Waiting for pod exec-volume-test-inlinevolume-brjv to disappear
Sep 28 19:26:35.476: INFO: Pod exec-volume-test-inlinevolume-brjv no longer exists
STEP: Deleting pod exec-volume-test-inlinevolume-brjv
Sep 28 19:26:35.476: INFO: Deleting pod "exec-volume-test-inlinevolume-brjv" in namespace "volume-4693"
Sep 28 19:26:35.719: INFO: Couldn't delete PD "aws://us-east-1a/vol-011d05b652a08dd4e", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-011d05b652a08dd4e is currently attached to i-075af63ba339a12f0
	status code: 400, request id: ccbcd6c6-0ce7-4d11-b016-63e2ca28c497
Sep 28 19:26:41.372: INFO: Couldn't delete PD "aws://us-east-1a/vol-011d05b652a08dd4e", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-011d05b652a08dd4e is currently attached to i-075af63ba339a12f0
	status code: 400, request id: 29b8533b-2439-4d82-b44c-1ecdb7d5f6a5
Sep 28 19:26:46.664: INFO: Successfully deleted PD "aws://us-east-1a/vol-011d05b652a08dd4e".
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 28 19:26:46.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-4693" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (ext4)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume","total":-1,"completed":2,"skipped":17,"failed":1,"failures":["[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:26:46.790: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 48 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 28 19:26:46.282: INFO: Waiting up to 5m0s for pod "downwardapi-volume-19dfa7f9-202f-482a-a658-dd87b2d0ac7d" in namespace "projected-5436" to be "Succeeded or Failed"
Sep 28 19:26:46.321: INFO: Pod "downwardapi-volume-19dfa7f9-202f-482a-a658-dd87b2d0ac7d": Phase="Pending", Reason="", readiness=false. Elapsed: 38.204976ms
Sep 28 19:26:48.360: INFO: Pod "downwardapi-volume-19dfa7f9-202f-482a-a658-dd87b2d0ac7d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.077737923s
STEP: Saw pod success
Sep 28 19:26:48.360: INFO: Pod "downwardapi-volume-19dfa7f9-202f-482a-a658-dd87b2d0ac7d" satisfied condition "Succeeded or Failed"
Sep 28 19:26:48.399: INFO: Trying to get logs from node ip-172-20-36-158.ec2.internal pod downwardapi-volume-19dfa7f9-202f-482a-a658-dd87b2d0ac7d container client-container: <nil>
STEP: delete the pod
Sep 28 19:26:48.480: INFO: Waiting for pod downwardapi-volume-19dfa7f9-202f-482a-a658-dd87b2d0ac7d to disappear
Sep 28 19:26:48.518: INFO: Pod downwardapi-volume-19dfa7f9-202f-482a-a658-dd87b2d0ac7d no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 28 19:26:48.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5436" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] CronJob should delete successful finished jobs with limit of one successful job","total":-1,"completed":1,"skipped":18,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 28 19:26:09.712: INFO: >>> kubeConfig: /root/.kube/config
... skipping 67 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":2,"skipped":18,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:26:48.808: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
... skipping 113 lines ...
Sep 28 19:26:46.821: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test override command
Sep 28 19:26:47.056: INFO: Waiting up to 5m0s for pod "client-containers-023ef860-0f9e-4360-ad3c-bb7087fd0a6a" in namespace "containers-4180" to be "Succeeded or Failed"
Sep 28 19:26:47.095: INFO: Pod "client-containers-023ef860-0f9e-4360-ad3c-bb7087fd0a6a": Phase="Pending", Reason="", readiness=false. Elapsed: 39.33888ms
Sep 28 19:26:49.133: INFO: Pod "client-containers-023ef860-0f9e-4360-ad3c-bb7087fd0a6a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.077815587s
STEP: Saw pod success
Sep 28 19:26:49.133: INFO: Pod "client-containers-023ef860-0f9e-4360-ad3c-bb7087fd0a6a" satisfied condition "Succeeded or Failed"
Sep 28 19:26:49.172: INFO: Trying to get logs from node ip-172-20-36-158.ec2.internal pod client-containers-023ef860-0f9e-4360-ad3c-bb7087fd0a6a container agnhost-container: <nil>
STEP: delete the pod
Sep 28 19:26:49.252: INFO: Waiting for pod client-containers-023ef860-0f9e-4360-ad3c-bb7087fd0a6a to disappear
Sep 28 19:26:49.290: INFO: Pod client-containers-023ef860-0f9e-4360-ad3c-bb7087fd0a6a no longer exists
[AfterEach] [sig-node] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 28 19:26:49.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-4180" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":24,"failed":1,"failures":["[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:26:49.387: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 110 lines ...
Sep 28 19:26:40.587: INFO: PersistentVolumeClaim pvc-km9xm found but phase is Pending instead of Bound.
Sep 28 19:26:42.623: INFO: PersistentVolumeClaim pvc-km9xm found and phase=Bound (12.26425003s)
Sep 28 19:26:42.623: INFO: Waiting up to 3m0s for PersistentVolume local-gbdk4 to have phase Bound
Sep 28 19:26:42.658: INFO: PersistentVolume local-gbdk4 found and phase=Bound (35.101404ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-vn7j
STEP: Creating a pod to test subpath
Sep 28 19:26:42.767: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-vn7j" in namespace "provisioning-8383" to be "Succeeded or Failed"
Sep 28 19:26:42.803: INFO: Pod "pod-subpath-test-preprovisionedpv-vn7j": Phase="Pending", Reason="", readiness=false. Elapsed: 35.923286ms
Sep 28 19:26:44.840: INFO: Pod "pod-subpath-test-preprovisionedpv-vn7j": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073021975s
Sep 28 19:26:46.880: INFO: Pod "pod-subpath-test-preprovisionedpv-vn7j": Phase="Pending", Reason="", readiness=false. Elapsed: 4.113544833s
Sep 28 19:26:48.917: INFO: Pod "pod-subpath-test-preprovisionedpv-vn7j": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.149725923s
STEP: Saw pod success
Sep 28 19:26:48.917: INFO: Pod "pod-subpath-test-preprovisionedpv-vn7j" satisfied condition "Succeeded or Failed"
Sep 28 19:26:48.952: INFO: Trying to get logs from node ip-172-20-50-189.ec2.internal pod pod-subpath-test-preprovisionedpv-vn7j container test-container-volume-preprovisionedpv-vn7j: <nil>
STEP: delete the pod
Sep 28 19:26:49.031: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-vn7j to disappear
Sep 28 19:26:49.067: INFO: Pod pod-subpath-test-preprovisionedpv-vn7j no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-vn7j
Sep 28 19:26:49.067: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-vn7j" in namespace "provisioning-8383"
... skipping 30 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 28 19:26:49.692: INFO: Waiting up to 5m0s for pod "downwardapi-volume-678f5cf9-02e3-480f-acf5-17168e4f753b" in namespace "downward-api-7082" to be "Succeeded or Failed"
Sep 28 19:26:49.729: INFO: Pod "downwardapi-volume-678f5cf9-02e3-480f-acf5-17168e4f753b": Phase="Pending", Reason="", readiness=false. Elapsed: 37.547775ms
Sep 28 19:26:51.768: INFO: Pod "downwardapi-volume-678f5cf9-02e3-480f-acf5-17168e4f753b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.076402768s
STEP: Saw pod success
Sep 28 19:26:51.768: INFO: Pod "downwardapi-volume-678f5cf9-02e3-480f-acf5-17168e4f753b" satisfied condition "Succeeded or Failed"
Sep 28 19:26:51.807: INFO: Trying to get logs from node ip-172-20-36-158.ec2.internal pod downwardapi-volume-678f5cf9-02e3-480f-acf5-17168e4f753b container client-container: <nil>
STEP: delete the pod
Sep 28 19:26:51.895: INFO: Waiting for pod downwardapi-volume-678f5cf9-02e3-480f-acf5-17168e4f753b to disappear
Sep 28 19:26:51.932: INFO: Pod downwardapi-volume-678f5cf9-02e3-480f-acf5-17168e4f753b no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 28 19:26:51.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7082" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":8,"skipped":40,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]"]}
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 28 19:26:49.633: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50
[It] volume on default medium should have the correct mode using FSGroup
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:71
STEP: Creating a pod to test emptydir volume type on node default medium
Sep 28 19:26:49.868: INFO: Waiting up to 5m0s for pod "pod-3f14062a-81f7-4b93-b12f-22020f9d4e2b" in namespace "emptydir-896" to be "Succeeded or Failed"
Sep 28 19:26:49.904: INFO: Pod "pod-3f14062a-81f7-4b93-b12f-22020f9d4e2b": Phase="Pending", Reason="", readiness=false. Elapsed: 35.341221ms
Sep 28 19:26:51.940: INFO: Pod "pod-3f14062a-81f7-4b93-b12f-22020f9d4e2b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.072193355s
STEP: Saw pod success
Sep 28 19:26:51.941: INFO: Pod "pod-3f14062a-81f7-4b93-b12f-22020f9d4e2b" satisfied condition "Succeeded or Failed"
Sep 28 19:26:51.976: INFO: Trying to get logs from node ip-172-20-36-158.ec2.internal pod pod-3f14062a-81f7-4b93-b12f-22020f9d4e2b container test-container: <nil>
STEP: delete the pod
Sep 28 19:26:52.051: INFO: Waiting for pod pod-3f14062a-81f7-4b93-b12f-22020f9d4e2b to disappear
Sep 28 19:26:52.087: INFO: Pod pod-3f14062a-81f7-4b93-b12f-22020f9d4e2b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 28 19:26:52.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-896" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on default medium should have the correct mode using FSGroup","total":-1,"completed":9,"skipped":40,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]"]}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
... skipping 54 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSIServiceAccountToken token should be plumbed down when csiServiceAccountTokenEnabled=true","total":-1,"completed":4,"skipped":28,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 28 19:26:41.788: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support readOnly directory specified in the volumeMount
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:369
Sep 28 19:26:41.979: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Sep 28 19:26:42.059: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-6666" in namespace "provisioning-6666" to be "Succeeded or Failed"
Sep 28 19:26:42.097: INFO: Pod "hostpath-symlink-prep-provisioning-6666": Phase="Pending", Reason="", readiness=false. Elapsed: 38.136778ms
Sep 28 19:26:44.136: INFO: Pod "hostpath-symlink-prep-provisioning-6666": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.07756647s
STEP: Saw pod success
Sep 28 19:26:44.136: INFO: Pod "hostpath-symlink-prep-provisioning-6666" satisfied condition "Succeeded or Failed"
Sep 28 19:26:44.136: INFO: Deleting pod "hostpath-symlink-prep-provisioning-6666" in namespace "provisioning-6666"
Sep 28 19:26:44.180: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-6666" to be fully deleted
Sep 28 19:26:44.218: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-hnqz
STEP: Creating a pod to test subpath
Sep 28 19:26:44.258: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-hnqz" in namespace "provisioning-6666" to be "Succeeded or Failed"
Sep 28 19:26:44.296: INFO: Pod "pod-subpath-test-inlinevolume-hnqz": Phase="Pending", Reason="", readiness=false. Elapsed: 38.355731ms
Sep 28 19:26:46.336: INFO: Pod "pod-subpath-test-inlinevolume-hnqz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078912392s
Sep 28 19:26:48.375: INFO: Pod "pod-subpath-test-inlinevolume-hnqz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.117724325s
Sep 28 19:26:50.414: INFO: Pod "pod-subpath-test-inlinevolume-hnqz": Phase="Pending", Reason="", readiness=false. Elapsed: 6.156570005s
Sep 28 19:26:52.456: INFO: Pod "pod-subpath-test-inlinevolume-hnqz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.198864858s
STEP: Saw pod success
Sep 28 19:26:52.457: INFO: Pod "pod-subpath-test-inlinevolume-hnqz" satisfied condition "Succeeded or Failed"
Sep 28 19:26:52.496: INFO: Trying to get logs from node ip-172-20-50-189.ec2.internal pod pod-subpath-test-inlinevolume-hnqz container test-container-subpath-inlinevolume-hnqz: <nil>
STEP: delete the pod
Sep 28 19:26:52.581: INFO: Waiting for pod pod-subpath-test-inlinevolume-hnqz to disappear
Sep 28 19:26:52.619: INFO: Pod pod-subpath-test-inlinevolume-hnqz no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-hnqz
Sep 28 19:26:52.620: INFO: Deleting pod "pod-subpath-test-inlinevolume-hnqz" in namespace "provisioning-6666"
STEP: Deleting pod
Sep 28 19:26:52.658: INFO: Deleting pod "pod-subpath-test-inlinevolume-hnqz" in namespace "provisioning-6666"
Sep 28 19:26:52.734: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-6666" in namespace "provisioning-6666" to be "Succeeded or Failed"
Sep 28 19:26:52.773: INFO: Pod "hostpath-symlink-prep-provisioning-6666": Phase="Pending", Reason="", readiness=false. Elapsed: 38.232467ms
Sep 28 19:26:54.811: INFO: Pod "hostpath-symlink-prep-provisioning-6666": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076849031s
Sep 28 19:26:56.851: INFO: Pod "hostpath-symlink-prep-provisioning-6666": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.116744425s
STEP: Saw pod success
Sep 28 19:26:56.851: INFO: Pod "hostpath-symlink-prep-provisioning-6666" satisfied condition "Succeeded or Failed"
Sep 28 19:26:56.851: INFO: Deleting pod "hostpath-symlink-prep-provisioning-6666" in namespace "provisioning-6666"
Sep 28 19:26:56.896: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-6666" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 28 19:26:56.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-6666" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:369
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":5,"skipped":28,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:26:57.038: INFO: Only supported for providers [gce gke] (not aws)
... skipping 187 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (block volmode)] provisioning
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should provision storage with pvc data source
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:238
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":4,"skipped":30,"failed":0}
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 28 19:26:56.976: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50
[It] nonexistent volume subPath should have the correct mode and owner using FSGroup
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:63
STEP: Creating a pod to test emptydir subpath on tmpfs
Sep 28 19:26:57.214: INFO: Waiting up to 5m0s for pod "pod-f1fd3def-a41f-4901-b8ac-2a163b470b74" in namespace "emptydir-3912" to be "Succeeded or Failed"
Sep 28 19:26:57.252: INFO: Pod "pod-f1fd3def-a41f-4901-b8ac-2a163b470b74": Phase="Pending", Reason="", readiness=false. Elapsed: 38.634164ms
Sep 28 19:26:59.291: INFO: Pod "pod-f1fd3def-a41f-4901-b8ac-2a163b470b74": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.077488323s
STEP: Saw pod success
Sep 28 19:26:59.291: INFO: Pod "pod-f1fd3def-a41f-4901-b8ac-2a163b470b74" satisfied condition "Succeeded or Failed"
Sep 28 19:26:59.330: INFO: Trying to get logs from node ip-172-20-36-158.ec2.internal pod pod-f1fd3def-a41f-4901-b8ac-2a163b470b74 container test-container: <nil>
STEP: delete the pod
Sep 28 19:26:59.415: INFO: Waiting for pod pod-f1fd3def-a41f-4901-b8ac-2a163b470b74 to disappear
Sep 28 19:26:59.454: INFO: Pod pod-f1fd3def-a41f-4901-b8ac-2a163b470b74 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 28 19:26:59.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3912" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] nonexistent volume subPath should have the correct mode and owner using FSGroup","total":-1,"completed":5,"skipped":30,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:26:59.549: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 88 lines ...
• [SLOW TEST:53.513 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be restarted with an exec liveness probe with timeout [MinimumKubeletVersion:1.20] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:217
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted with an exec liveness probe with timeout [MinimumKubeletVersion:1.20] [NodeConformance]","total":-1,"completed":4,"skipped":30,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:27:00.244: INFO: Driver hostPath doesn't support ext4 -- skipping
... skipping 67 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 28 19:27:00.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-5853" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":6,"skipped":42,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:27:00.442: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 97 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159

      Driver hostPath doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":43,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 28 19:26:48.609: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 62 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: block] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":8,"skipped":43,"failed":0}

SSSS
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":38,"failed":1,"failures":["[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}
[BeforeEach] [sig-cli] Kubectl Port forwarding
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 28 19:26:52.020: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename port-forwarding
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 30 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:452
    that expects a client request
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:453
      should support a client that connects, sends NO DATA, and disconnects
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:454
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source","total":-1,"completed":7,"skipped":24,"failed":0}
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 28 19:26:59.229: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 18 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 28 19:27:04.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-5088" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment reaping should cascade to its replica sets and pods","total":-1,"completed":8,"skipped":24,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:27:04.211: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 17 lines ...
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 28 19:27:02.344: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 28 19:27:10.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-9100" for this suite.


• [SLOW TEST:8.350 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":-1,"completed":9,"skipped":47,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 58 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume one after the other
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":5,"skipped":42,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 100 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSIStorageCapacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1134
    CSIStorageCapacity disabled
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1177
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity disabled","total":-1,"completed":6,"skipped":61,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:27:12.591: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: block]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 152 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI attach test using mock driver
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:316
    should require VolumeAttach for drivers with attachment
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:338
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI attach test using mock driver should require VolumeAttach for drivers with attachment","total":-1,"completed":3,"skipped":9,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:27:13.472: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
... skipping 59 lines ...
I0928 19:24:07.607693    5421 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0928 19:24:10.607885    5421 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the NodePort service to type=ExternalName
Sep 28 19:24:10.794: INFO: Creating new exec pod
Sep 28 19:24:16.908: INFO: Running '/tmp/kubectl2271960906/kubectl --server=https://api.e2e-b08e534318-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-6534 exec execpodrfnxd -- /bin/sh -x -c nslookup nodeport-service.services-6534.svc.cluster.local'
Sep 28 19:24:32.499: INFO: rc: 1
Sep 28 19:24:32.499: INFO: ExternalName service "services-6534/execpodrfnxd" failed to resolve to IP
Sep 28 19:24:34.501: INFO: Running '/tmp/kubectl2271960906/kubectl --server=https://api.e2e-b08e534318-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-6534 exec execpodrfnxd -- /bin/sh -x -c nslookup nodeport-service.services-6534.svc.cluster.local'
Sep 28 19:24:50.057: INFO: rc: 1
Sep 28 19:24:50.057: INFO: ExternalName service "services-6534/execpodrfnxd" failed to resolve to IP
Sep 28 19:24:50.499: INFO: Running '/tmp/kubectl2271960906/kubectl --server=https://api.e2e-b08e534318-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-6534 exec execpodrfnxd -- /bin/sh -x -c nslookup nodeport-service.services-6534.svc.cluster.local'
Sep 28 19:25:06.039: INFO: rc: 1
Sep 28 19:25:06.039: INFO: ExternalName service "services-6534/execpodrfnxd" failed to resolve to IP
Sep 28 19:25:06.500: INFO: Running '/tmp/kubectl2271960906/kubectl --server=https://api.e2e-b08e534318-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-6534 exec execpodrfnxd -- /bin/sh -x -c nslookup nodeport-service.services-6534.svc.cluster.local'
Sep 28 19:25:22.047: INFO: rc: 1
Sep 28 19:25:22.047: INFO: ExternalName service "services-6534/execpodrfnxd" failed to resolve to IP
Sep 28 19:25:22.500: INFO: Running '/tmp/kubectl2271960906/kubectl --server=https://api.e2e-b08e534318-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-6534 exec execpodrfnxd -- /bin/sh -x -c nslookup nodeport-service.services-6534.svc.cluster.local'
Sep 28 19:25:38.065: INFO: rc: 1
Sep 28 19:25:38.066: INFO: ExternalName service "services-6534/execpodrfnxd" failed to resolve to IP
Sep 28 19:25:38.500: INFO: Running '/tmp/kubectl2271960906/kubectl --server=https://api.e2e-b08e534318-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-6534 exec execpodrfnxd -- /bin/sh -x -c nslookup nodeport-service.services-6534.svc.cluster.local'
Sep 28 19:25:54.041: INFO: rc: 1
Sep 28 19:25:54.042: INFO: ExternalName service "services-6534/execpodrfnxd" failed to resolve to IP
Sep 28 19:25:54.500: INFO: Running '/tmp/kubectl2271960906/kubectl --server=https://api.e2e-b08e534318-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-6534 exec execpodrfnxd -- /bin/sh -x -c nslookup nodeport-service.services-6534.svc.cluster.local'
Sep 28 19:26:10.022: INFO: rc: 1
Sep 28 19:26:10.022: INFO: ExternalName service "services-6534/execpodrfnxd" failed to resolve to IP
Sep 28 19:26:10.500: INFO: Running '/tmp/kubectl2271960906/kubectl --server=https://api.e2e-b08e534318-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-6534 exec execpodrfnxd -- /bin/sh -x -c nslookup nodeport-service.services-6534.svc.cluster.local'
Sep 28 19:26:26.112: INFO: rc: 1
Sep 28 19:26:26.112: INFO: ExternalName service "services-6534/execpodrfnxd" failed to resolve to IP
Sep 28 19:26:26.500: INFO: Running '/tmp/kubectl2271960906/kubectl --server=https://api.e2e-b08e534318-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-6534 exec execpodrfnxd -- /bin/sh -x -c nslookup nodeport-service.services-6534.svc.cluster.local'
Sep 28 19:26:42.048: INFO: rc: 1
Sep 28 19:26:42.048: INFO: ExternalName service "services-6534/execpodrfnxd" failed to resolve to IP
Sep 28 19:26:42.048: INFO: Running '/tmp/kubectl2271960906/kubectl --server=https://api.e2e-b08e534318-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-6534 exec execpodrfnxd -- /bin/sh -x -c nslookup nodeport-service.services-6534.svc.cluster.local'
Sep 28 19:26:57.567: INFO: rc: 1
Sep 28 19:26:57.567: INFO: ExternalName service "services-6534/execpodrfnxd" failed to resolve to IP
Sep 28 19:26:57.567: FAIL: Unexpected error:
    <*errors.errorString | 0xc00033e240>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 29 lines ...
Sep 28 19:27:13.239: INFO: At 2021-09-28 19:24:05 +0000 UTC - event for externalsvc-fq7gd: {kubelet ip-172-20-62-211.ec2.internal} Created: Created container externalsvc
Sep 28 19:27:13.239: INFO: At 2021-09-28 19:24:05 +0000 UTC - event for externalsvc-fq7gd: {kubelet ip-172-20-62-211.ec2.internal} Started: Started container externalsvc
Sep 28 19:27:13.239: INFO: At 2021-09-28 19:24:10 +0000 UTC - event for execpodrfnxd: {default-scheduler } Scheduled: Successfully assigned services-6534/execpodrfnxd to ip-172-20-61-119.ec2.internal
Sep 28 19:27:13.239: INFO: At 2021-09-28 19:24:11 +0000 UTC - event for execpodrfnxd: {kubelet ip-172-20-61-119.ec2.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
Sep 28 19:27:13.239: INFO: At 2021-09-28 19:24:12 +0000 UTC - event for execpodrfnxd: {kubelet ip-172-20-61-119.ec2.internal} Started: Started container agnhost-container
Sep 28 19:27:13.239: INFO: At 2021-09-28 19:24:12 +0000 UTC - event for execpodrfnxd: {kubelet ip-172-20-61-119.ec2.internal} Created: Created container agnhost-container
Sep 28 19:27:13.239: INFO: At 2021-09-28 19:26:57 +0000 UTC - event for externalsvc: {endpoint-controller } FailedToUpdateEndpoint: Failed to update endpoint services-6534/externalsvc: Operation cannot be fulfilled on endpoints "externalsvc": the object has been modified; please apply your changes to the latest version and try again
Sep 28 19:27:13.239: INFO: At 2021-09-28 19:26:57 +0000 UTC - event for externalsvc-7tt7k: {kubelet ip-172-20-50-189.ec2.internal} Killing: Stopping container externalsvc
Sep 28 19:27:13.239: INFO: At 2021-09-28 19:26:57 +0000 UTC - event for externalsvc-fq7gd: {kubelet ip-172-20-62-211.ec2.internal} Killing: Stopping container externalsvc
Sep 28 19:27:13.276: INFO: POD           NODE                           PHASE    GRACE  CONDITIONS
Sep 28 19:27:13.276: INFO: execpodrfnxd  ip-172-20-61-119.ec2.internal  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-09-28 19:24:10 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-09-28 19:24:12 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-09-28 19:24:12 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-09-28 19:24:10 +0000 UTC  }]
Sep 28 19:27:13.276: INFO: 
Sep 28 19:27:13.315: INFO: 
... skipping 88 lines ...
Sep 28 19:27:14.191: INFO: 	Container liveness-probe ready: true, restart count 0
Sep 28 19:27:14.191: INFO: 	Container node-driver-registrar ready: true, restart count 0
Sep 28 19:27:14.191: INFO: update-demo-nautilus-kpfkp started at 2021-09-28 19:25:48 +0000 UTC (0+1 container statuses recorded)
Sep 28 19:27:14.191: INFO: 	Container update-demo ready: true, restart count 0
Sep 28 19:27:14.191: INFO: pod-9c36fc86-5d11-4238-80f0-a8aa36c4af5b started at 2021-09-28 19:25:53 +0000 UTC (0+1 container statuses recorded)
Sep 28 19:27:14.191: INFO: 	Container write-pod ready: true, restart count 0
Sep 28 19:27:14.191: INFO: fail-once-local-75ppb started at 2021-09-28 19:27:08 +0000 UTC (0+1 container statuses recorded)
Sep 28 19:27:14.191: INFO: 	Container c ready: false, restart count 1
Sep 28 19:27:14.191: INFO: csi-hostpath-provisioner-0 started at 2021-09-28 19:25:42 +0000 UTC (0+1 container statuses recorded)
Sep 28 19:27:14.191: INFO: 	Container csi-provisioner ready: true, restart count 0
Sep 28 19:27:14.191: INFO: affinity-clusterip-transition-jkcwn started at 2021-09-28 19:25:47 +0000 UTC (0+1 container statuses recorded)
Sep 28 19:27:14.191: INFO: 	Container affinity-clusterip-transition ready: true, restart count 0
Sep 28 19:27:14.191: INFO: kube-proxy-ip-172-20-50-189.ec2.internal started at 2021-09-28 19:20:23 +0000 UTC (0+1 container statuses recorded)
Sep 28 19:27:14.191: INFO: 	Container kube-proxy ready: true, restart count 0
Sep 28 19:27:14.191: INFO: agnhost-primary-5db8ddd565-b2cpl started at 2021-09-28 19:24:15 +0000 UTC (0+1 container statuses recorded)
Sep 28 19:27:14.191: INFO: 	Container primary ready: true, restart count 0
Sep 28 19:27:14.191: INFO: fail-once-local-7j5g5 started at 2021-09-28 19:27:05 +0000 UTC (0+1 container statuses recorded)
Sep 28 19:27:14.191: INFO: 	Container c ready: false, restart count 1
Sep 28 19:27:14.191: INFO: csi-hostpath-resizer-0 started at 2021-09-28 19:25:42 +0000 UTC (0+1 container statuses recorded)
Sep 28 19:27:14.191: INFO: 	Container csi-resizer ready: true, restart count 0
Sep 28 19:27:14.191: INFO: csi-hostpath-snapshotter-0 started at 2021-09-28 19:25:42 +0000 UTC (0+1 container statuses recorded)
Sep 28 19:27:14.191: INFO: 	Container csi-snapshotter ready: true, restart count 0
Sep 28 19:27:14.191: INFO: execpod-affinitymwnmj started at 2021-09-28 19:25:56 +0000 UTC (0+1 container statuses recorded)
... skipping 21 lines ...
Sep 28 19:27:14.589: INFO: 
Logging pods the kubelet thinks is on node ip-172-20-61-119.ec2.internal
Sep 28 19:27:14.637: INFO: execpods5rnn started at <nil> (0+0 container statuses recorded)
Sep 28 19:27:14.637: INFO: pod-server-2 started at <nil> (0+0 container statuses recorded)
Sep 28 19:27:14.637: INFO: pod-bdcfaf32-9602-4446-a034-e3afa3b40804 started at <nil> (0+0 container statuses recorded)
Sep 28 19:27:14.637: INFO: local-client started at <nil> (0+0 container statuses recorded)
Sep 28 19:27:14.637: INFO: fail-once-local-4tc7j started at 2021-09-28 19:27:02 +0000 UTC (0+1 container statuses recorded)
Sep 28 19:27:14.637: INFO: 	Container c ready: false, restart count 1
Sep 28 19:27:14.637: INFO: hostexec-ip-172-20-61-119.ec2.internal-gtjjj started at 2021-09-28 19:26:49 +0000 UTC (0+1 container statuses recorded)
Sep 28 19:27:14.637: INFO: 	Container agnhost-container ready: true, restart count 0
Sep 28 19:27:14.637: INFO: execpodrfnxd started at 2021-09-28 19:24:10 +0000 UTC (0+1 container statuses recorded)
Sep 28 19:27:14.637: INFO: 	Container agnhost-container ready: true, restart count 0
Sep 28 19:27:14.637: INFO: agnhost-replica-6bcf79b489-l2fvn started at 2021-09-28 19:24:15 +0000 UTC (0+1 container statuses recorded)
... skipping 38 lines ...
Sep 28 19:27:15.005: INFO: 	Container querier ready: true, restart count 0
Sep 28 19:27:15.005: INFO: 	Container webserver ready: true, restart count 0
Sep 28 19:27:15.005: INFO: pvc-volume-tester-w2q6b started at 2021-09-28 19:27:09 +0000 UTC (0+1 container statuses recorded)
Sep 28 19:27:15.005: INFO: 	Container volume-tester ready: true, restart count 0
Sep 28 19:27:15.005: INFO: kopeio-networking-agent-zpsfs started at 2021-09-28 19:20:27 +0000 UTC (0+1 container statuses recorded)
Sep 28 19:27:15.005: INFO: 	Container networking-agent ready: true, restart count 0
Sep 28 19:27:15.005: INFO: fail-once-local-hlgsw started at 2021-09-28 19:27:02 +0000 UTC (0+1 container statuses recorded)
Sep 28 19:27:15.005: INFO: 	Container c ready: false, restart count 1
Sep 28 19:27:15.005: INFO: pvc-volume-tester-4pjpd started at 2021-09-28 19:27:11 +0000 UTC (0+1 container statuses recorded)
Sep 28 19:27:15.005: INFO: 	Container volume-tester ready: true, restart count 0
Sep 28 19:27:15.005: INFO: csi-hostpath-provisioner-0 started at 2021-09-28 19:26:46 +0000 UTC (0+1 container statuses recorded)
Sep 28 19:27:15.005: INFO: 	Container csi-provisioner ready: true, restart count 0
Sep 28 19:27:15.005: INFO: nodeport-update-service-xllsk started at 2021-09-28 19:27:00 +0000 UTC (0+1 container statuses recorded)
... skipping 49 lines ...
• Failure [194.529 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to change the type from NodePort to ExternalName [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Sep 28 19:26:57.567: Unexpected error:
      <*errors.errorString | 0xc00033e240>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

... skipping 6 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name projected-secret-test-map-120f2f0e-9de5-44bd-bfe7-2f09902aa254
STEP: Creating a pod to test consume secrets
Sep 28 19:27:13.766: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-cea4880b-0d2c-4ab0-963f-3c71056020a6" in namespace "projected-7259" to be "Succeeded or Failed"
Sep 28 19:27:13.803: INFO: Pod "pod-projected-secrets-cea4880b-0d2c-4ab0-963f-3c71056020a6": Phase="Pending", Reason="", readiness=false. Elapsed: 37.352975ms
Sep 28 19:27:15.842: INFO: Pod "pod-projected-secrets-cea4880b-0d2c-4ab0-963f-3c71056020a6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.075963856s
STEP: Saw pod success
Sep 28 19:27:15.842: INFO: Pod "pod-projected-secrets-cea4880b-0d2c-4ab0-963f-3c71056020a6" satisfied condition "Succeeded or Failed"
Sep 28 19:27:15.880: INFO: Trying to get logs from node ip-172-20-36-158.ec2.internal pod pod-projected-secrets-cea4880b-0d2c-4ab0-963f-3c71056020a6 container projected-secret-volume-test: <nil>
STEP: delete the pod
Sep 28 19:27:15.962: INFO: Waiting for pod pod-projected-secrets-cea4880b-0d2c-4ab0-963f-3c71056020a6 to disappear
Sep 28 19:27:15.999: INFO: Pod pod-projected-secrets-cea4880b-0d2c-4ab0-963f-3c71056020a6 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 8 lines ...
Sep 28 19:27:11.154: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:69
STEP: Creating a pod to test pod.Spec.SecurityContext.SupplementalGroups
Sep 28 19:27:11.386: INFO: Waiting up to 5m0s for pod "security-context-35e8597c-11a5-4fdc-8ae0-815cb0aef741" in namespace "security-context-5762" to be "Succeeded or Failed"
Sep 28 19:27:11.426: INFO: Pod "security-context-35e8597c-11a5-4fdc-8ae0-815cb0aef741": Phase="Pending", Reason="", readiness=false. Elapsed: 40.529437ms
Sep 28 19:27:13.466: INFO: Pod "security-context-35e8597c-11a5-4fdc-8ae0-815cb0aef741": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080541629s
Sep 28 19:27:15.505: INFO: Pod "security-context-35e8597c-11a5-4fdc-8ae0-815cb0aef741": Phase="Pending", Reason="", readiness=false. Elapsed: 4.119537047s
Sep 28 19:27:17.545: INFO: Pod "security-context-35e8597c-11a5-4fdc-8ae0-815cb0aef741": Phase="Pending", Reason="", readiness=false. Elapsed: 6.159089242s
Sep 28 19:27:19.584: INFO: Pod "security-context-35e8597c-11a5-4fdc-8ae0-815cb0aef741": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.19836008s
STEP: Saw pod success
Sep 28 19:27:19.584: INFO: Pod "security-context-35e8597c-11a5-4fdc-8ae0-815cb0aef741" satisfied condition "Succeeded or Failed"
Sep 28 19:27:19.623: INFO: Trying to get logs from node ip-172-20-61-119.ec2.internal pod security-context-35e8597c-11a5-4fdc-8ae0-815cb0aef741 container test-container: <nil>
STEP: delete the pod
Sep 28 19:27:19.706: INFO: Waiting for pod security-context-35e8597c-11a5-4fdc-8ae0-815cb0aef741 to disappear
Sep 28 19:27:19.744: INFO: Pod security-context-35e8597c-11a5-4fdc-8ae0-815cb0aef741 no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:8.672 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:69
------------------------------
{"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]","total":-1,"completed":6,"skipped":43,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:27:19.836: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 93 lines ...
Sep 28 19:24:28.711: INFO: The status of Pod netserver-3 is Running (Ready = true)
STEP: Creating test pods
Sep 28 19:24:43.058: INFO: Setting MaxTries for pod polling to 46 for networking test based on endpoint count 4
Sep 28 19:24:43.058: INFO: Going to poll 100.96.1.5 on port 8080 at least 0 times, with a maximum of 46 tries before failing
Sep 28 19:24:43.096: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.1.5:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-8876 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Sep 28 19:24:43.096: INFO: >>> kubeConfig: /root/.kube/config
Sep 28 19:24:44.408: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.1.5:8080/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Sep 28 19:24:44.408: INFO: Waiting for [netserver-0] endpoints (expected=[netserver-0], actual=[])
... skipping 120 lines ...
Sep 28 19:26:28.353: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.1.5:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-8876 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Sep 28 19:26:28.353: INFO: >>> kubeConfig: /root/.kube/config
Sep 28 19:26:29.699: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.1.5:8080/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Sep 28 19:26:29.699: INFO: Waiting for [netserver-0] endpoints (expected=[netserver-0], actual=[])
Sep 28 19:26:31.738: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.1.5:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-8876 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Sep 28 19:26:31.738: INFO: >>> kubeConfig: /root/.kube/config
Sep 28 19:26:33.059: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.1.5:8080/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Sep 28 19:26:33.059: INFO: Waiting for [netserver-0] endpoints (expected=[netserver-0], actual=[])
Sep 28 19:26:35.099: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.1.5:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-8876 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Sep 28 19:26:35.099: INFO: >>> kubeConfig: /root/.kube/config
Sep 28 19:26:36.408: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.1.5:8080/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Sep 28 19:26:36.408: INFO: Waiting for [netserver-0] endpoints (expected=[netserver-0], actual=[])
Sep 28 19:26:38.447: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.1.5:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-8876 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Sep 28 19:26:38.447: INFO: >>> kubeConfig: /root/.kube/config
Sep 28 19:26:39.754: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.1.5:8080/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Sep 28 19:26:39.754: INFO: Waiting for [netserver-0] endpoints (expected=[netserver-0], actual=[])
Sep 28 19:26:41.795: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.1.5:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-8876 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Sep 28 19:26:41.795: INFO: >>> kubeConfig: /root/.kube/config
Sep 28 19:26:43.156: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.1.5:8080/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Sep 28 19:26:43.156: INFO: Waiting for [netserver-0] endpoints (expected=[netserver-0], actual=[])
Sep 28 19:26:45.195: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.1.5:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-8876 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Sep 28 19:26:45.196: INFO: >>> kubeConfig: /root/.kube/config
Sep 28 19:26:46.507: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.1.5:8080/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Sep 28 19:26:46.507: INFO: Waiting for [netserver-0] endpoints (expected=[netserver-0], actual=[])
Sep 28 19:26:48.546: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.1.5:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-8876 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Sep 28 19:26:48.546: INFO: >>> kubeConfig: /root/.kube/config
Sep 28 19:26:49.937: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.1.5:8080/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Sep 28 19:26:49.937: INFO: Waiting for [netserver-0] endpoints (expected=[netserver-0], actual=[])
Sep 28 19:26:51.975: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.1.5:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-8876 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Sep 28 19:26:51.976: INFO: >>> kubeConfig: /root/.kube/config
Sep 28 19:26:53.314: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.1.5:8080/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Sep 28 19:26:53.314: INFO: Waiting for [netserver-0] endpoints (expected=[netserver-0], actual=[])
Sep 28 19:26:55.353: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.1.5:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-8876 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Sep 28 19:26:55.353: INFO: >>> kubeConfig: /root/.kube/config
Sep 28 19:26:56.715: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.1.5:8080/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Sep 28 19:26:56.715: INFO: Waiting for [netserver-0] endpoints (expected=[netserver-0], actual=[])
Sep 28 19:26:58.754: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.1.5:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-8876 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Sep 28 19:26:58.754: INFO: >>> kubeConfig: /root/.kube/config
Sep 28 19:27:00.301: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.1.5:8080/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Sep 28 19:27:00.301: INFO: Waiting for [netserver-0] endpoints (expected=[netserver-0], actual=[])
Sep 28 19:27:02.340: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.1.5:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-8876 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Sep 28 19:27:02.340: INFO: >>> kubeConfig: /root/.kube/config
Sep 28 19:27:03.649: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.1.5:8080/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Sep 28 19:27:03.649: INFO: Waiting for [netserver-0] endpoints (expected=[netserver-0], actual=[])
Sep 28 19:27:05.688: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.1.5:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-8876 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Sep 28 19:27:05.688: INFO: >>> kubeConfig: /root/.kube/config
Sep 28 19:27:07.046: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.1.5:8080/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Sep 28 19:27:07.046: INFO: Waiting for [netserver-0] endpoints (expected=[netserver-0], actual=[])
Sep 28 19:27:09.088: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.1.5:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-8876 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Sep 28 19:27:09.088: INFO: >>> kubeConfig: /root/.kube/config
Sep 28 19:27:10.413: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.1.5:8080/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Sep 28 19:27:10.413: INFO: Waiting for [netserver-0] endpoints (expected=[netserver-0], actual=[])
Sep 28 19:27:12.453: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.1.5:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-8876 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Sep 28 19:27:12.453: INFO: >>> kubeConfig: /root/.kube/config
Sep 28 19:27:13.750: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.1.5:8080/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Sep 28 19:27:13.750: INFO: Waiting for [netserver-0] endpoints (expected=[netserver-0], actual=[])
Sep 28 19:27:15.790: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.1.5:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-8876 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Sep 28 19:27:15.790: INFO: >>> kubeConfig: /root/.kube/config
Sep 28 19:27:17.115: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.1.5:8080/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Sep 28 19:27:17.115: INFO: Waiting for [netserver-0] endpoints (expected=[netserver-0], actual=[])
Sep 28 19:27:19.115: INFO: 
Output of kubectl describe pod pod-network-test-8876/netserver-0:

Sep 28 19:27:19.115: INFO: Running '/tmp/kubectl2271960906/kubectl --server=https://api.e2e-b08e534318-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=pod-network-test-8876 describe pod netserver-0 --namespace=pod-network-test-8876'
Sep 28 19:27:19.407: INFO: stderr: ""
... skipping 241 lines ...
  Normal  Scheduled  3m18s  default-scheduler  Successfully assigned pod-network-test-8876/netserver-3 to ip-172-20-62-211.ec2.internal
  Normal  Pulling    3m17s  kubelet            Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
  Normal  Pulled     3m15s  kubelet            Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 2.44141043s
  Normal  Created    3m15s  kubelet            Created container webserver
  Normal  Started    3m14s  kubelet            Started container webserver

Sep 28 19:27:20.283: FAIL: Error dialing HTTP node to pod failed to find expected endpoints, 
tries 46
Command curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.1.5:8080/hostName
retrieved map[]
expected map[netserver-0:{}]

Full Stack Trace
... skipping 282 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23
  Granular Checks: Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

    Sep 28 19:27:20.283: Error dialing HTTP node to pod failed to find expected endpoints, 
    tries 46
    Command curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.1.5:8080/hostName
    retrieved map[]
    expected map[netserver-0:{}]

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113
------------------------------
{"msg":"FAILED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":0,"skipped":7,"failed":1,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]"]}
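The failure above comes from the e2e framework's connectivity probe: it repeatedly curls the target pod's `/hostName` endpoint from a host-network pod and collects the hostnames it sees until the observed set matches the expected endpoints, giving up after a fixed number of tries (46 here). A minimal stand-alone sketch of that polling logic, with a local HTTP server standing in for the `netserver-0` pod (all names and parameters here are illustrative, not the framework's real API):

```python
import http.server
import threading
import time
import urllib.request

class HostNameHandler(http.server.BaseHTTPRequestHandler):
    """Stand-in for the agnhost netserver's /hostName endpoint."""
    def do_GET(self):
        body = b"netserver-0"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging

def poll_endpoints(url, expected, max_tries=46, delay=0.01):
    """Poll url until the set of hostnames seen equals `expected`,
    mirroring the retrieved-map vs expected-map check in the log."""
    seen = set()
    for _ in range(max_tries):
        try:
            with urllib.request.urlopen(url, timeout=1) as resp:
                name = resp.read().decode().strip()
                if name:
                    seen.add(name)
        except OSError:
            pass  # connection refused / timeout: retry, like the e2e loop
        if seen == expected:
            return seen
        time.sleep(delay)
    return seen  # caller compares against expected and fails the test

server = http.server.HTTPServer(("127.0.0.1", 0), HostNameHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_address[1]}/hostName"
result = poll_endpoints(url, {"netserver-0"})
server.shutdown()
print(result)
```

In the failing run the retrieved map stayed empty (`map[]`) for all 46 tries, i.e. every curl exited non-zero with empty stdout, which points at pod-to-pod reachability rather than a wrong hostname response.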

SSSS
------------------------------
[BeforeEach] [sig-network] NetworkPolicy API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 28 19:27:23.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "networkpolicies-4976" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] NetworkPolicy API should support creating NetworkPolicy API operations","total":-1,"completed":1,"skipped":11,"failed":1,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:27:23.977: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 67 lines ...
[sig-storage] CSI Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: csi-hostpath]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver "csi-hostpath" does not support topology - skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:92
------------------------------
... skipping 38 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:369

      Driver emptydir doesn't support PreprovisionedPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects a client request should support a client that connects, sends NO DATA, and disconnects","total":-1,"completed":5,"skipped":38,"failed":1,"failures":["[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 28 19:27:02.958: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 55 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume one after the other
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":6,"skipped":38,"failed":1,"failures":["[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}

SSS
------------------------------
[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 51 lines ...
• [SLOW TEST:35.325 seconds]
[sig-network] Conntrack
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to preserve UDP traffic when server pod cycles for a NodePort service
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:130
------------------------------
{"msg":"PASSED [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a NodePort service","total":-1,"completed":10,"skipped":47,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:27:27.535: INFO: Only supported for providers [azure] (not aws)
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: azure-disk]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [azure] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1566
------------------------------
... skipping 131 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link-bindmounted] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":7,"skipped":54,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:27:29.745: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 74 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 28 19:27:30.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-9741" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks succeed","total":-1,"completed":7,"skipped":41,"failed":1,"failures":["[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:27:30.166: INFO: Only supported for providers [openstack] (not aws)
... skipping 14 lines ...
      Only supported for providers [openstack] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1092
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":13,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 28 19:27:16.086: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 63 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume one after the other
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithoutformat] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":5,"skipped":13,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:27:32.086: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 36 lines ...
• [SLOW TEST:140.633 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should not emit unexpected warnings
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:221
------------------------------
{"msg":"PASSED [sig-apps] CronJob should not emit unexpected warnings","total":-1,"completed":1,"skipped":23,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]"]}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:27:33.384: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 14 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SS
------------------------------
{"msg":"FAILED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":-1,"completed":0,"skipped":2,"failed":1,"failures":["[sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]"]}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 28 19:27:15.695: INFO: >>> kubeConfig: /root/.kube/config
... skipping 16 lines ...
Sep 28 19:27:26.588: INFO: PersistentVolumeClaim pvc-824jw found but phase is Pending instead of Bound.
Sep 28 19:27:28.626: INFO: PersistentVolumeClaim pvc-824jw found and phase=Bound (10.227173959s)
Sep 28 19:27:28.626: INFO: Waiting up to 3m0s for PersistentVolume local-ksbgg to have phase Bound
Sep 28 19:27:28.663: INFO: PersistentVolume local-ksbgg found and phase=Bound (37.353541ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-kk4h
STEP: Creating a pod to test subpath
Sep 28 19:27:28.777: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-kk4h" in namespace "provisioning-5424" to be "Succeeded or Failed"
Sep 28 19:27:28.815: INFO: Pod "pod-subpath-test-preprovisionedpv-kk4h": Phase="Pending", Reason="", readiness=false. Elapsed: 37.767535ms
Sep 28 19:27:30.854: INFO: Pod "pod-subpath-test-preprovisionedpv-kk4h": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076327131s
Sep 28 19:27:32.894: INFO: Pod "pod-subpath-test-preprovisionedpv-kk4h": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.116239068s
STEP: Saw pod success
Sep 28 19:27:32.894: INFO: Pod "pod-subpath-test-preprovisionedpv-kk4h" satisfied condition "Succeeded or Failed"
Sep 28 19:27:32.934: INFO: Trying to get logs from node ip-172-20-36-158.ec2.internal pod pod-subpath-test-preprovisionedpv-kk4h container test-container-subpath-preprovisionedpv-kk4h: <nil>
STEP: delete the pod
Sep 28 19:27:33.015: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-kk4h to disappear
Sep 28 19:27:33.052: INFO: Pod pod-subpath-test-preprovisionedpv-kk4h no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-kk4h
Sep 28 19:27:33.052: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-kk4h" in namespace "provisioning-5424"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":1,"skipped":2,"failed":1,"failures":["[sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]"]}
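The subPath test above succeeds by waiting up to 5m for the pod to reach the "Succeeded or Failed" condition, re-checking the phase on an interval and logging the elapsed time. A minimal sketch of that wait loop, with a fake phase getter in place of a real API call (names and intervals are illustrative assumptions, not the framework's actual implementation):

```python
import itertools
import time

TERMINAL = {"Succeeded", "Failed"}

def wait_for_terminal_phase(get_phase, timeout=300.0, interval=0.01):
    """Poll get_phase() until the pod reaches a terminal phase or the
    timeout elapses, like the 'Waiting up to 5m0s for pod ...' lines."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        phase = get_phase()
        if phase in TERMINAL:
            return phase
        time.sleep(interval)
    raise TimeoutError("pod did not reach a terminal phase")

# Simulated pod that is Pending twice, then Succeeded,
# matching the Pending/Pending/Succeeded progression in the log.
phases = itertools.chain(["Pending", "Pending"], itertools.repeat("Succeeded"))
final = wait_for_terminal_phase(lambda: next(phases))
print(final)
```

Note the condition accepts `Failed` as well: the test then checks that the final phase was `Succeeded` ("Saw pod success") before fetching container logs and deleting the pod.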

SSSSSS
------------------------------
[BeforeEach] [sig-network] EndpointSliceMirroring
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 11 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 28 19:27:33.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslicemirroring-2933" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","total":-1,"completed":2,"skipped":30,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]"]}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
... skipping 89 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":3,"skipped":35,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:27:34.046: INFO: Driver local doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 109 lines ...
[It] should support existing single file [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
Sep 28 19:27:30.375: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Sep 28 19:27:30.413: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-2s2w
STEP: Creating a pod to test subpath
Sep 28 19:27:30.454: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-2s2w" in namespace "provisioning-7654" to be "Succeeded or Failed"
Sep 28 19:27:30.491: INFO: Pod "pod-subpath-test-inlinevolume-2s2w": Phase="Pending", Reason="", readiness=false. Elapsed: 37.336259ms
Sep 28 19:27:32.532: INFO: Pod "pod-subpath-test-inlinevolume-2s2w": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078689455s
Sep 28 19:27:34.572: INFO: Pod "pod-subpath-test-inlinevolume-2s2w": Phase="Pending", Reason="", readiness=false. Elapsed: 4.1184492s
Sep 28 19:27:36.611: INFO: Pod "pod-subpath-test-inlinevolume-2s2w": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.157351205s
STEP: Saw pod success
Sep 28 19:27:36.611: INFO: Pod "pod-subpath-test-inlinevolume-2s2w" satisfied condition "Succeeded or Failed"
Sep 28 19:27:36.649: INFO: Trying to get logs from node ip-172-20-61-119.ec2.internal pod pod-subpath-test-inlinevolume-2s2w container test-container-subpath-inlinevolume-2s2w: <nil>
STEP: delete the pod
Sep 28 19:27:36.731: INFO: Waiting for pod pod-subpath-test-inlinevolume-2s2w to disappear
Sep 28 19:27:36.769: INFO: Pod pod-subpath-test-inlinevolume-2s2w no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-2s2w
Sep 28 19:27:36.769: INFO: Deleting pod "pod-subpath-test-inlinevolume-2s2w" in namespace "provisioning-7654"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":8,"skipped":47,"failed":1,"failures":["[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}

S
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 37 lines ...
Sep 28 19:27:35.328: INFO: Running '/tmp/kubectl2271960906/kubectl --server=https://api.e2e-b08e534318-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3376 explain e2e-test-crd-publish-openapi-8129-crds.spec'
Sep 28 19:27:35.682: INFO: stderr: ""
Sep 28 19:27:35.682: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-8129-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
Sep 28 19:27:35.682: INFO: Running '/tmp/kubectl2271960906/kubectl --server=https://api.e2e-b08e534318-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3376 explain e2e-test-crd-publish-openapi-8129-crds.spec.bars'
Sep 28 19:27:36.019: INFO: stderr: ""
Sep 28 19:27:36.019: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-8129-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t<string>\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t<string> -required-\n     Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Sep 28 19:27:36.019: INFO: Running '/tmp/kubectl2271960906/kubectl --server=https://api.e2e-b08e534318-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3376 explain e2e-test-crd-publish-openapi-8129-crds.spec.bars2'
Sep 28 19:27:36.381: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 28 19:27:40.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3376" for this suite.
... skipping 2 lines ...
• [SLOW TEST:12.612 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD with validation schema [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":-1,"completed":11,"skipped":56,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:27:40.207: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 2 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-bindmounted]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 10 lines ...
[It] should support existing directory
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
Sep 28 19:27:32.285: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Sep 28 19:27:32.285: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-8q84
STEP: Creating a pod to test subpath
Sep 28 19:27:32.325: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-8q84" in namespace "provisioning-829" to be "Succeeded or Failed"
Sep 28 19:27:32.363: INFO: Pod "pod-subpath-test-inlinevolume-8q84": Phase="Pending", Reason="", readiness=false. Elapsed: 37.528719ms
Sep 28 19:27:34.401: INFO: Pod "pod-subpath-test-inlinevolume-8q84": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075495634s
Sep 28 19:27:36.442: INFO: Pod "pod-subpath-test-inlinevolume-8q84": Phase="Pending", Reason="", readiness=false. Elapsed: 4.116973427s
Sep 28 19:27:38.482: INFO: Pod "pod-subpath-test-inlinevolume-8q84": Phase="Pending", Reason="", readiness=false. Elapsed: 6.156888658s
Sep 28 19:27:40.521: INFO: Pod "pod-subpath-test-inlinevolume-8q84": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.195752335s
STEP: Saw pod success
Sep 28 19:27:40.521: INFO: Pod "pod-subpath-test-inlinevolume-8q84" satisfied condition "Succeeded or Failed"
Sep 28 19:27:40.559: INFO: Trying to get logs from node ip-172-20-61-119.ec2.internal pod pod-subpath-test-inlinevolume-8q84 container test-container-volume-inlinevolume-8q84: <nil>
STEP: delete the pod
Sep 28 19:27:40.640: INFO: Waiting for pod pod-subpath-test-inlinevolume-8q84 to disappear
Sep 28 19:27:40.678: INFO: Pod pod-subpath-test-inlinevolume-8q84 no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-8q84
Sep 28 19:27:40.678: INFO: Deleting pod "pod-subpath-test-inlinevolume-8q84" in namespace "provisioning-829"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support existing directory","total":-1,"completed":6,"skipped":14,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:27:40.840: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 276 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI attach test using mock driver
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:316
    should not require VolumeAttach for drivers without attachment
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:338
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI attach test using mock driver should not require VolumeAttach for drivers without attachment","total":-1,"completed":9,"skipped":28,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 83 lines ...
Sep 28 19:26:53.013: INFO: PersistentVolumeClaim csi-hostpath8lvkn found but phase is Pending instead of Bound.
Sep 28 19:26:55.053: INFO: PersistentVolumeClaim csi-hostpath8lvkn found but phase is Pending instead of Bound.
Sep 28 19:26:57.094: INFO: PersistentVolumeClaim csi-hostpath8lvkn found but phase is Pending instead of Bound.
Sep 28 19:26:59.133: INFO: PersistentVolumeClaim csi-hostpath8lvkn found and phase=Bound (12.282013212s)
STEP: Creating pod pod-subpath-test-dynamicpv-snpx
STEP: Creating a pod to test subpath
Sep 28 19:26:59.249: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-snpx" in namespace "provisioning-2153" to be "Succeeded or Failed"
Sep 28 19:26:59.289: INFO: Pod "pod-subpath-test-dynamicpv-snpx": Phase="Pending", Reason="", readiness=false. Elapsed: 39.022493ms
Sep 28 19:27:01.328: INFO: Pod "pod-subpath-test-dynamicpv-snpx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078683275s
Sep 28 19:27:03.369: INFO: Pod "pod-subpath-test-dynamicpv-snpx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.119701585s
Sep 28 19:27:05.408: INFO: Pod "pod-subpath-test-dynamicpv-snpx": Phase="Pending", Reason="", readiness=false. Elapsed: 6.158416007s
Sep 28 19:27:07.452: INFO: Pod "pod-subpath-test-dynamicpv-snpx": Phase="Pending", Reason="", readiness=false. Elapsed: 8.202632754s
Sep 28 19:27:09.492: INFO: Pod "pod-subpath-test-dynamicpv-snpx": Phase="Pending", Reason="", readiness=false. Elapsed: 10.242290496s
Sep 28 19:27:11.543: INFO: Pod "pod-subpath-test-dynamicpv-snpx": Phase="Pending", Reason="", readiness=false. Elapsed: 12.293706759s
Sep 28 19:27:13.583: INFO: Pod "pod-subpath-test-dynamicpv-snpx": Phase="Pending", Reason="", readiness=false. Elapsed: 14.333419679s
Sep 28 19:27:15.622: INFO: Pod "pod-subpath-test-dynamicpv-snpx": Phase="Pending", Reason="", readiness=false. Elapsed: 16.372131154s
Sep 28 19:27:17.661: INFO: Pod "pod-subpath-test-dynamicpv-snpx": Phase="Pending", Reason="", readiness=false. Elapsed: 18.410999219s
Sep 28 19:27:19.700: INFO: Pod "pod-subpath-test-dynamicpv-snpx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.450913515s
STEP: Saw pod success
Sep 28 19:27:19.701: INFO: Pod "pod-subpath-test-dynamicpv-snpx" satisfied condition "Succeeded or Failed"
Sep 28 19:27:19.739: INFO: Trying to get logs from node ip-172-20-62-211.ec2.internal pod pod-subpath-test-dynamicpv-snpx container test-container-volume-dynamicpv-snpx: <nil>
STEP: delete the pod
Sep 28 19:27:19.821: INFO: Waiting for pod pod-subpath-test-dynamicpv-snpx to disappear
Sep 28 19:27:19.859: INFO: Pod pod-subpath-test-dynamicpv-snpx no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-snpx
Sep 28 19:27:19.859: INFO: Deleting pod "pod-subpath-test-dynamicpv-snpx" in namespace "provisioning-2153"
... skipping 54 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory","total":-1,"completed":6,"skipped":61,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:27:43.595: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 30 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205

      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1301
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":-1,"completed":8,"skipped":61,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 28 19:27:34.497: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 62 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-bindmounted] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":9,"skipped":61,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 28 19:27:43.844: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dbec390d-0a65-495f-81db-03edb86c9434" in namespace "downward-api-8358" to be "Succeeded or Failed"
Sep 28 19:27:43.882: INFO: Pod "downwardapi-volume-dbec390d-0a65-495f-81db-03edb86c9434": Phase="Pending", Reason="", readiness=false. Elapsed: 38.24451ms
Sep 28 19:27:45.921: INFO: Pod "downwardapi-volume-dbec390d-0a65-495f-81db-03edb86c9434": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.077770233s
STEP: Saw pod success
Sep 28 19:27:45.921: INFO: Pod "downwardapi-volume-dbec390d-0a65-495f-81db-03edb86c9434" satisfied condition "Succeeded or Failed"
Sep 28 19:27:45.960: INFO: Trying to get logs from node ip-172-20-61-119.ec2.internal pod downwardapi-volume-dbec390d-0a65-495f-81db-03edb86c9434 container client-container: <nil>
STEP: delete the pod
Sep 28 19:27:46.043: INFO: Waiting for pod downwardapi-volume-dbec390d-0a65-495f-81db-03edb86c9434 to disappear
Sep 28 19:27:46.081: INFO: Pod downwardapi-volume-dbec390d-0a65-495f-81db-03edb86c9434 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 28 19:27:46.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8358" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":63,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]"]}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:27:46.169: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 32 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194

      Only supported for providers [azure] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1566
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return generic metadata details across all namespaces for nodes","total":-1,"completed":7,"skipped":35,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 28 19:27:41.329: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 57 lines ...
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 28 19:27:46.211: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename topology
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to schedule a pod which has topologies that conflict with AllowedTopologies
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192
Sep 28 19:27:46.455: INFO: found topology map[topology.kubernetes.io/zone:us-east-1a]
Sep 28 19:27:46.455: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
Sep 28 19:27:46.456: INFO: Not enough topologies in cluster -- skipping
STEP: Deleting pvc
STEP: Deleting sc
... skipping 7 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: aws]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Not enough topologies in cluster -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:199
------------------------------
... skipping 22 lines ...
• [SLOW TEST:22.923 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":19,"failed":1,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:27:46.968: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 119 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 28 19:27:47.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3579" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support memory backed volumes of specified size","total":-1,"completed":3,"skipped":33,"failed":1,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]"]}

SS
------------------------------
[BeforeEach] [sig-network] Firewall rule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 28 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Sep 28 19:27:44.357: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-1ec3b9fe-544c-427e-bf3b-955f878ddcd5" in namespace "security-context-test-7640" to be "Succeeded or Failed"
Sep 28 19:27:44.395: INFO: Pod "alpine-nnp-false-1ec3b9fe-544c-427e-bf3b-955f878ddcd5": Phase="Pending", Reason="", readiness=false. Elapsed: 38.224599ms
Sep 28 19:27:46.439: INFO: Pod "alpine-nnp-false-1ec3b9fe-544c-427e-bf3b-955f878ddcd5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081753183s
Sep 28 19:27:48.479: INFO: Pod "alpine-nnp-false-1ec3b9fe-544c-427e-bf3b-955f878ddcd5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.12222475s
Sep 28 19:27:48.479: INFO: Pod "alpine-nnp-false-1ec3b9fe-544c-427e-bf3b-955f878ddcd5" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 28 19:27:48.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-7640" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":62,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:27:48.620: INFO: Only supported for providers [azure] (not aws)
... skipping 37 lines ...
      Only supported for providers [vsphere] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1437
------------------------------
S
------------------------------
{"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":3,"failed":0}
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 28 19:25:47.878: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 59 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Sep 28 19:27:47.840: INFO: Waiting up to 5m0s for pod "busybox-user-65534-fb7368f6-0489-4733-ad13-2fe45e54201e" in namespace "security-context-test-8749" to be "Succeeded or Failed"
Sep 28 19:27:47.878: INFO: Pod "busybox-user-65534-fb7368f6-0489-4733-ad13-2fe45e54201e": Phase="Pending", Reason="", readiness=false. Elapsed: 38.637735ms
Sep 28 19:27:49.917: INFO: Pod "busybox-user-65534-fb7368f6-0489-4733-ad13-2fe45e54201e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07684043s
Sep 28 19:27:51.956: INFO: Pod "busybox-user-65534-fb7368f6-0489-4733-ad13-2fe45e54201e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.115912016s
Sep 28 19:27:53.995: INFO: Pod "busybox-user-65534-fb7368f6-0489-4733-ad13-2fe45e54201e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.155583398s
Sep 28 19:27:53.995: INFO: Pod "busybox-user-65534-fb7368f6-0489-4733-ad13-2fe45e54201e" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 28 19:27:53.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-8749" for this suite.


... skipping 2 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a container with runAsUser
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:50
    should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":39,"failed":1,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:27:54.085: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 150 lines ...
Sep 28 19:27:41.175: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support readOnly file specified in the volumeMount [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:384
Sep 28 19:27:41.359: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Sep 28 19:27:41.435: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-5655" in namespace "provisioning-5655" to be "Succeeded or Failed"
Sep 28 19:27:41.471: INFO: Pod "hostpath-symlink-prep-provisioning-5655": Phase="Pending", Reason="", readiness=false. Elapsed: 36.569445ms
Sep 28 19:27:43.509: INFO: Pod "hostpath-symlink-prep-provisioning-5655": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.074376828s
STEP: Saw pod success
Sep 28 19:27:43.509: INFO: Pod "hostpath-symlink-prep-provisioning-5655" satisfied condition "Succeeded or Failed"
Sep 28 19:27:43.509: INFO: Deleting pod "hostpath-symlink-prep-provisioning-5655" in namespace "provisioning-5655"
Sep 28 19:27:43.550: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-5655" to be fully deleted
Sep 28 19:27:43.586: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-rxh4
STEP: Creating a pod to test subpath
Sep 28 19:27:43.624: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-rxh4" in namespace "provisioning-5655" to be "Succeeded or Failed"
Sep 28 19:27:43.662: INFO: Pod "pod-subpath-test-inlinevolume-rxh4": Phase="Pending", Reason="", readiness=false. Elapsed: 38.736718ms
Sep 28 19:27:45.699: INFO: Pod "pod-subpath-test-inlinevolume-rxh4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075631502s
Sep 28 19:27:47.737: INFO: Pod "pod-subpath-test-inlinevolume-rxh4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.11349519s
Sep 28 19:27:49.775: INFO: Pod "pod-subpath-test-inlinevolume-rxh4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.150900593s
STEP: Saw pod success
Sep 28 19:27:49.775: INFO: Pod "pod-subpath-test-inlinevolume-rxh4" satisfied condition "Succeeded or Failed"
Sep 28 19:27:49.811: INFO: Trying to get logs from node ip-172-20-36-158.ec2.internal pod pod-subpath-test-inlinevolume-rxh4 container test-container-subpath-inlinevolume-rxh4: <nil>
STEP: delete the pod
Sep 28 19:27:49.897: INFO: Waiting for pod pod-subpath-test-inlinevolume-rxh4 to disappear
Sep 28 19:27:49.933: INFO: Pod pod-subpath-test-inlinevolume-rxh4 no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-rxh4
Sep 28 19:27:49.933: INFO: Deleting pod "pod-subpath-test-inlinevolume-rxh4" in namespace "provisioning-5655"
STEP: Deleting pod
Sep 28 19:27:49.972: INFO: Deleting pod "pod-subpath-test-inlinevolume-rxh4" in namespace "provisioning-5655"
Sep 28 19:27:50.048: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-5655" in namespace "provisioning-5655" to be "Succeeded or Failed"
Sep 28 19:27:50.085: INFO: Pod "hostpath-symlink-prep-provisioning-5655": Phase="Pending", Reason="", readiness=false. Elapsed: 37.187885ms
Sep 28 19:27:52.123: INFO: Pod "hostpath-symlink-prep-provisioning-5655": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074409149s
Sep 28 19:27:54.160: INFO: Pod "hostpath-symlink-prep-provisioning-5655": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.111705982s
STEP: Saw pod success
Sep 28 19:27:54.160: INFO: Pod "hostpath-symlink-prep-provisioning-5655" satisfied condition "Succeeded or Failed"
Sep 28 19:27:54.160: INFO: Deleting pod "hostpath-symlink-prep-provisioning-5655" in namespace "provisioning-5655"
Sep 28 19:27:54.202: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-5655" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 28 19:27:54.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-5655" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:384
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":10,"skipped":31,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:27:54.327: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 39 lines ...
Sep 28 19:27:25.782: INFO: PersistentVolumeClaim pvc-qm4zt found but phase is Pending instead of Bound.
Sep 28 19:27:27.820: INFO: PersistentVolumeClaim pvc-qm4zt found and phase=Bound (14.311692025s)
Sep 28 19:27:27.820: INFO: Waiting up to 3m0s for PersistentVolume local-z86zw to have phase Bound
Sep 28 19:27:27.857: INFO: PersistentVolume local-z86zw found and phase=Bound (36.310737ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-ggx5
STEP: Creating a pod to test atomic-volume-subpath
Sep 28 19:27:27.972: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-ggx5" in namespace "provisioning-5366" to be "Succeeded or Failed"
Sep 28 19:27:28.009: INFO: Pod "pod-subpath-test-preprovisionedpv-ggx5": Phase="Pending", Reason="", readiness=false. Elapsed: 36.363888ms
Sep 28 19:27:30.046: INFO: Pod "pod-subpath-test-preprovisionedpv-ggx5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073841858s
Sep 28 19:27:32.086: INFO: Pod "pod-subpath-test-preprovisionedpv-ggx5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.113014751s
Sep 28 19:27:34.123: INFO: Pod "pod-subpath-test-preprovisionedpv-ggx5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.150102103s
Sep 28 19:27:36.160: INFO: Pod "pod-subpath-test-preprovisionedpv-ggx5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.187279242s
Sep 28 19:27:38.198: INFO: Pod "pod-subpath-test-preprovisionedpv-ggx5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.2259872s
... skipping 3 lines ...
Sep 28 19:27:46.350: INFO: Pod "pod-subpath-test-preprovisionedpv-ggx5": Phase="Running", Reason="", readiness=true. Elapsed: 18.377847755s
Sep 28 19:27:48.388: INFO: Pod "pod-subpath-test-preprovisionedpv-ggx5": Phase="Running", Reason="", readiness=true. Elapsed: 20.415588766s
Sep 28 19:27:50.426: INFO: Pod "pod-subpath-test-preprovisionedpv-ggx5": Phase="Running", Reason="", readiness=true. Elapsed: 22.4537832s
Sep 28 19:27:52.467: INFO: Pod "pod-subpath-test-preprovisionedpv-ggx5": Phase="Running", Reason="", readiness=true. Elapsed: 24.494638406s
Sep 28 19:27:54.505: INFO: Pod "pod-subpath-test-preprovisionedpv-ggx5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.532134526s
STEP: Saw pod success
Sep 28 19:27:54.505: INFO: Pod "pod-subpath-test-preprovisionedpv-ggx5" satisfied condition "Succeeded or Failed"
Sep 28 19:27:54.542: INFO: Trying to get logs from node ip-172-20-62-211.ec2.internal pod pod-subpath-test-preprovisionedpv-ggx5 container test-container-subpath-preprovisionedpv-ggx5: <nil>
STEP: delete the pod
Sep 28 19:27:54.627: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-ggx5 to disappear
Sep 28 19:27:54.665: INFO: Pod pod-subpath-test-preprovisionedpv-ggx5 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-ggx5
Sep 28 19:27:54.665: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-ggx5" in namespace "provisioning-5366"
... skipping 35 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 28 19:27:55.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-897" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":-1,"completed":11,"skipped":32,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:27:55.315: INFO: Only supported for providers [vsphere] (not aws)
... skipping 56 lines ...
• [SLOW TEST:6.719 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":11,"skipped":72,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:27:55.399: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
... skipping 177 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 28 19:27:57.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "discovery-2783" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":-1,"completed":12,"skipped":84,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:27:57.189: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
... skipping 104 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] volumes should store data","total":-1,"completed":4,"skipped":43,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
... skipping 122 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should resize volume when PVC is edited while pod is using it
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:246
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it","total":-1,"completed":4,"skipped":44,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:27:58.488: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 58 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 28 19:27:58.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5369" for this suite.

•S
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl apply should apply a new configuration to an existing RC","total":-1,"completed":13,"skipped":92,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:27:58.512: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 62 lines ...
• [SLOW TEST:100.445 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should schedule multiple jobs concurrently [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","total":-1,"completed":3,"skipped":11,"failed":0}
[BeforeEach] [sig-api-machinery] client-go should negotiate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 28 19:28:01.760: INFO: >>> kubeConfig: /root/.kube/config
[It] watch and report errors with accept "application/json,application/vnd.kubernetes.protobuf"
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/protocol.go:46
Sep 28 19:28:01.761: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] client-go should negotiate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 28 19:28:01.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

•
------------------------------
{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/json,application/vnd.kubernetes.protobuf\"","total":-1,"completed":4,"skipped":11,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:28:01.879: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 79 lines ...
Sep 28 19:27:54.142: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support seccomp default which is unconfined [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:183
STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod
Sep 28 19:27:54.377: INFO: Waiting up to 5m0s for pod "security-context-1ae6541a-a89d-4690-8f96-e841d164ca3a" in namespace "security-context-9692" to be "Succeeded or Failed"
Sep 28 19:27:54.415: INFO: Pod "security-context-1ae6541a-a89d-4690-8f96-e841d164ca3a": Phase="Pending", Reason="", readiness=false. Elapsed: 37.879239ms
Sep 28 19:27:56.454: INFO: Pod "security-context-1ae6541a-a89d-4690-8f96-e841d164ca3a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076783646s
Sep 28 19:27:58.493: INFO: Pod "security-context-1ae6541a-a89d-4690-8f96-e841d164ca3a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.115294139s
Sep 28 19:28:00.544: INFO: Pod "security-context-1ae6541a-a89d-4690-8f96-e841d164ca3a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.166553923s
Sep 28 19:28:02.583: INFO: Pod "security-context-1ae6541a-a89d-4690-8f96-e841d164ca3a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.205857793s
STEP: Saw pod success
Sep 28 19:28:02.583: INFO: Pod "security-context-1ae6541a-a89d-4690-8f96-e841d164ca3a" satisfied condition "Succeeded or Failed"
Sep 28 19:28:02.625: INFO: Trying to get logs from node ip-172-20-50-189.ec2.internal pod security-context-1ae6541a-a89d-4690-8f96-e841d164ca3a container test-container: <nil>
STEP: delete the pod
Sep 28 19:28:02.709: INFO: Waiting for pod security-context-1ae6541a-a89d-4690-8f96-e841d164ca3a to disappear
Sep 28 19:28:02.747: INFO: Pod security-context-1ae6541a-a89d-4690-8f96-e841d164ca3a no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:8.690 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support seccomp default which is unconfined [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:183
------------------------------
{"msg":"PASSED [sig-node] Security Context should support seccomp default which is unconfined [LinuxOnly]","total":-1,"completed":5,"skipped":50,"failed":1,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:28:02.847: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 58 lines ...
      Only supported for providers [azure] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1566
------------------------------
S
------------------------------
{"msg":"PASSED [sig-node] Lease lease API should be available [Conformance]","total":-1,"completed":5,"skipped":21,"failed":0}
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 28 19:28:02.678: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 10 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 28 19:28:02.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-6652" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":-1,"completed":6,"skipped":21,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:28:03.051: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 68 lines ...
Sep 28 19:27:54.812: INFO: PersistentVolumeClaim pvc-m8sfk found but phase is Pending instead of Bound.
Sep 28 19:27:56.851: INFO: PersistentVolumeClaim pvc-m8sfk found and phase=Bound (12.268275605s)
Sep 28 19:27:56.851: INFO: Waiting up to 3m0s for PersistentVolume local-hjxv4 to have phase Bound
Sep 28 19:27:56.888: INFO: PersistentVolume local-hjxv4 found and phase=Bound (37.675551ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-tfwt
STEP: Creating a pod to test subpath
Sep 28 19:27:57.018: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-tfwt" in namespace "provisioning-9458" to be "Succeeded or Failed"
Sep 28 19:27:57.056: INFO: Pod "pod-subpath-test-preprovisionedpv-tfwt": Phase="Pending", Reason="", readiness=false. Elapsed: 37.55769ms
Sep 28 19:27:59.103: INFO: Pod "pod-subpath-test-preprovisionedpv-tfwt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.085151965s
Sep 28 19:28:01.142: INFO: Pod "pod-subpath-test-preprovisionedpv-tfwt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.124254489s
STEP: Saw pod success
Sep 28 19:28:01.142: INFO: Pod "pod-subpath-test-preprovisionedpv-tfwt" satisfied condition "Succeeded or Failed"
Sep 28 19:28:01.180: INFO: Trying to get logs from node ip-172-20-61-119.ec2.internal pod pod-subpath-test-preprovisionedpv-tfwt container test-container-subpath-preprovisionedpv-tfwt: <nil>
STEP: delete the pod
Sep 28 19:28:01.266: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-tfwt to disappear
Sep 28 19:28:01.304: INFO: Pod pod-subpath-test-preprovisionedpv-tfwt no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-tfwt
Sep 28 19:28:01.304: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-tfwt" in namespace "provisioning-9458"
... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":9,"skipped":48,"failed":1,"failures":["[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:28:03.160: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 52 lines ...
STEP: Destroying namespace "node-problem-detector-2753" for this suite.


S [SKIPPING] in Spec Setup (BeforeEach) [0.257 seconds]
[sig-node] NodeProblemDetector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should run without error [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:60

  Only supported for providers [gce gke] (not aws)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:55
------------------------------
... skipping 42 lines ...
      Driver hostPath doesn't support ext3 -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:121
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should implement legacy replacement when the update strategy is OnDelete","total":-1,"completed":4,"skipped":3,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 28 19:27:49.460: INFO: >>> kubeConfig: /root/.kube/config
... skipping 14 lines ...
Sep 28 19:27:56.302: INFO: PersistentVolumeClaim pvc-th4n5 found but phase is Pending instead of Bound.
Sep 28 19:27:58.340: INFO: PersistentVolumeClaim pvc-th4n5 found and phase=Bound (6.15377514s)
Sep 28 19:27:58.340: INFO: Waiting up to 3m0s for PersistentVolume local-c6kxk to have phase Bound
Sep 28 19:27:58.378: INFO: PersistentVolume local-c6kxk found and phase=Bound (38.119569ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-zfb2
STEP: Creating a pod to test subpath
Sep 28 19:27:58.494: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-zfb2" in namespace "provisioning-6226" to be "Succeeded or Failed"
Sep 28 19:27:58.532: INFO: Pod "pod-subpath-test-preprovisionedpv-zfb2": Phase="Pending", Reason="", readiness=false. Elapsed: 38.069878ms
Sep 28 19:28:00.572: INFO: Pod "pod-subpath-test-preprovisionedpv-zfb2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077403636s
Sep 28 19:28:02.612: INFO: Pod "pod-subpath-test-preprovisionedpv-zfb2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.117394115s
STEP: Saw pod success
Sep 28 19:28:02.612: INFO: Pod "pod-subpath-test-preprovisionedpv-zfb2" satisfied condition "Succeeded or Failed"
Sep 28 19:28:02.650: INFO: Trying to get logs from node ip-172-20-62-211.ec2.internal pod pod-subpath-test-preprovisionedpv-zfb2 container test-container-subpath-preprovisionedpv-zfb2: <nil>
STEP: delete the pod
Sep 28 19:28:02.739: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-zfb2 to disappear
Sep 28 19:28:02.777: INFO: Pod pod-subpath-test-preprovisionedpv-zfb2 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-zfb2
Sep 28 19:28:02.777: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-zfb2" in namespace "provisioning-6226"
... skipping 21 lines ...
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":5,"skipped":3,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:28:03.404: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] 


... skipping 64439 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":39,"skipped":257,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:43:46.809: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 60 lines ...
Sep 28 19:43:41.439: INFO: PersistentVolumeClaim pvc-g6767 found but phase is Pending instead of Bound.
Sep 28 19:43:43.476: INFO: PersistentVolumeClaim pvc-g6767 found and phase=Bound (10.217299525s)
Sep 28 19:43:43.476: INFO: Waiting up to 3m0s for PersistentVolume local-qznv7 to have phase Bound
Sep 28 19:43:43.510: INFO: PersistentVolume local-qznv7 found and phase=Bound (34.691311ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-65jz
STEP: Creating a pod to test subpath
Sep 28 19:43:43.629: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-65jz" in namespace "provisioning-652" to be "Succeeded or Failed"
Sep 28 19:43:43.672: INFO: Pod "pod-subpath-test-preprovisionedpv-65jz": Phase="Pending", Reason="", readiness=false. Elapsed: 43.574412ms
Sep 28 19:43:45.709: INFO: Pod "pod-subpath-test-preprovisionedpv-65jz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080176381s
Sep 28 19:43:47.749: INFO: Pod "pod-subpath-test-preprovisionedpv-65jz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.120445516s
STEP: Saw pod success
Sep 28 19:43:47.750: INFO: Pod "pod-subpath-test-preprovisionedpv-65jz" satisfied condition "Succeeded or Failed"
Sep 28 19:43:47.785: INFO: Trying to get logs from node ip-172-20-62-211.ec2.internal pod pod-subpath-test-preprovisionedpv-65jz container test-container-subpath-preprovisionedpv-65jz: <nil>
STEP: delete the pod
Sep 28 19:43:47.866: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-65jz to disappear
Sep 28 19:43:47.901: INFO: Pod pod-subpath-test-preprovisionedpv-65jz no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-65jz
Sep 28 19:43:47.901: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-65jz" in namespace "provisioning-652"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:369
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":27,"skipped":204,"failed":2,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:43:48.490: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 37 lines ...
• [SLOW TEST:29.922 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group but different versions [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":-1,"completed":28,"skipped":157,"failed":1,"failures":["[sig-network] DNS should resolve DNS of partial qualified names for the cluster [LinuxOnly]"]}

SSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 22 lines ...
Sep 28 19:38:35.992: INFO: PersistentVolume nfs-svb2n found and phase=Bound (36.800728ms)
Sep 28 19:38:36.029: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-x5pxp] to have phase Bound
Sep 28 19:38:36.067: INFO: PersistentVolumeClaim pvc-x5pxp found and phase=Bound (38.186308ms)
STEP: Checking pod has write access to PersistentVolumes
Sep 28 19:38:36.104: INFO: Creating nfs test pod
Sep 28 19:38:36.142: INFO: Pod should terminate with exitcode 0 (success)
Sep 28 19:38:36.142: INFO: Waiting up to 5m0s for pod "pvc-tester-4gg4f" in namespace "pv-3595" to be "Succeeded or Failed"
Sep 28 19:38:36.180: INFO: Pod "pvc-tester-4gg4f": Phase="Pending", Reason="", readiness=false. Elapsed: 37.588992ms
Sep 28 19:38:38.217: INFO: Pod "pvc-tester-4gg4f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07458692s
Sep 28 19:38:40.255: INFO: Pod "pvc-tester-4gg4f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.11299626s
Sep 28 19:38:42.293: INFO: Pod "pvc-tester-4gg4f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.150530448s
Sep 28 19:38:44.331: INFO: Pod "pvc-tester-4gg4f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.189032372s
Sep 28 19:38:46.372: INFO: Pod "pvc-tester-4gg4f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.229527492s
... skipping 138 lines ...
Sep 28 19:43:29.734: INFO: Pod "pvc-tester-4gg4f": Phase="Pending", Reason="", readiness=false. Elapsed: 4m53.592183371s
Sep 28 19:43:31.772: INFO: Pod "pvc-tester-4gg4f": Phase="Pending", Reason="", readiness=false. Elapsed: 4m55.629820974s
Sep 28 19:43:33.810: INFO: Pod "pvc-tester-4gg4f": Phase="Pending", Reason="", readiness=false. Elapsed: 4m57.668147826s
Sep 28 19:43:35.847: INFO: Pod "pvc-tester-4gg4f": Phase="Pending", Reason="", readiness=false. Elapsed: 4m59.705159261s
Sep 28 19:43:37.849: INFO: Deleting pod "pvc-tester-4gg4f" in namespace "pv-3595"
Sep 28 19:43:37.888: INFO: Wait up to 5m0s for pod "pvc-tester-4gg4f" to be fully deleted
Sep 28 19:43:45.963: FAIL: Unexpected error:
    <*errors.errorString | 0xc0044f31e0>: {
        s: "pod \"pvc-tester-4gg4f\" did not exit with Success: pod \"pvc-tester-4gg4f\" failed to reach Success: Gave up after waiting 5m0s for pod \"pvc-tester-4gg4f\" to be \"Succeeded or Failed\"",
    }
    pod "pvc-tester-4gg4f" did not exit with Success: pod "pvc-tester-4gg4f" failed to reach Success: Gave up after waiting 5m0s for pod "pvc-tester-4gg4f" to be "Succeeded or Failed"
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/storage.glob..func22.2.4.2()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:238 +0x371
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc002eef980)
... skipping 26 lines ...
Sep 28 19:43:54.343: INFO: At 2021-09-28 19:38:32 +0000 UTC - event for nfs-server: {kubelet ip-172-20-50-189.ec2.internal} Created: Created container nfs-server
Sep 28 19:43:54.343: INFO: At 2021-09-28 19:38:32 +0000 UTC - event for nfs-server: {kubelet ip-172-20-50-189.ec2.internal} Started: Started container nfs-server
Sep 28 19:43:54.344: INFO: At 2021-09-28 19:38:35 +0000 UTC - event for pvc-8slhb: {persistentvolume-controller } FailedBinding: no persistent volumes available for this claim and no storage class is set
Sep 28 19:43:54.344: INFO: At 2021-09-28 19:38:35 +0000 UTC - event for pvc-qbjk2: {persistentvolume-controller } FailedBinding: no persistent volumes available for this claim and no storage class is set
Sep 28 19:43:54.344: INFO: At 2021-09-28 19:38:36 +0000 UTC - event for pvc-tester-4gg4f: {default-scheduler } Scheduled: Successfully assigned pv-3595/pvc-tester-4gg4f to ip-172-20-62-211.ec2.internal
Sep 28 19:43:54.344: INFO: At 2021-09-28 19:40:39 +0000 UTC - event for pvc-tester-4gg4f: {kubelet ip-172-20-62-211.ec2.internal} FailedMount: Unable to attach or mount volumes: unmounted volumes=[volume1], unattached volumes=[volume1 kube-api-access-9dkxj]: timed out waiting for the condition
Sep 28 19:43:54.344: INFO: At 2021-09-28 19:41:40 +0000 UTC - event for pvc-tester-4gg4f: {kubelet ip-172-20-62-211.ec2.internal} FailedMount: MountVolume.SetUp failed for volume "nfs-6fqx2" : mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t nfs 100.96.4.33:/exports /var/lib/kubelet/pods/764c1d9a-2283-4e8c-a340-aad83aa1f1ad/volumes/kubernetes.io~nfs/nfs-6fqx2
Output: mount.nfs: Connection timed out

Sep 28 19:43:54.344: INFO: At 2021-09-28 19:43:46 +0000 UTC - event for nfs-server: {kubelet ip-172-20-50-189.ec2.internal} Killing: Stopping container nfs-server
Sep 28 19:43:54.382: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
... skipping 211 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:122
    with multiple PVs and PVCs all in same ns
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:212
      should create 2 PVs and 4 PVCs: test write access [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:233

      Sep 28 19:43:45.963: Unexpected error:
          <*errors.errorString | 0xc0044f31e0>: {
              s: "pod \"pvc-tester-4gg4f\" did not exit with Success: pod \"pvc-tester-4gg4f\" failed to reach Success: Gave up after waiting 5m0s for pod \"pvc-tester-4gg4f\" to be \"Succeeded or Failed\"",
          }
          pod "pvc-tester-4gg4f" did not exit with Success: pod "pvc-tester-4gg4f" failed to reach Success: Gave up after waiting 5m0s for pod "pvc-tester-4gg4f" to be "Succeeded or Failed"
      occurred

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:238
------------------------------
{"msg":"FAILED [sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns should create 2 PVs and 4 PVCs: test write access","total":-1,"completed":13,"skipped":52,"failed":3,"failures":["[sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs should create a non-pre-bound PV and PVC: test write access ","[sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns should create 2 PVs and 4 PVCs: test write access"]}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:43:56.586: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 135 lines ...
Sep 28 19:43:44.559: INFO: >>> kubeConfig: /root/.kube/config
Sep 28 19:43:44.926: INFO: Exec stderr: ""
Sep 28 19:43:47.044: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir "/var/lib/kubelet/mount-propagation-6806"/host; mount -t tmpfs e2e-mount-propagation-host "/var/lib/kubelet/mount-propagation-6806"/host; echo host > "/var/lib/kubelet/mount-propagation-6806"/host/file] Namespace:mount-propagation-6806 PodName:hostexec-ip-172-20-61-119.ec2.internal-498mc ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Sep 28 19:43:47.044: INFO: >>> kubeConfig: /root/.kube/config
Sep 28 19:43:47.388: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-6806 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Sep 28 19:43:47.388: INFO: >>> kubeConfig: /root/.kube/config
Sep 28 19:43:47.673: INFO: pod master mount master: stdout: "master", stderr: "" error: <nil>
Sep 28 19:43:47.711: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-6806 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Sep 28 19:43:47.711: INFO: >>> kubeConfig: /root/.kube/config
Sep 28 19:43:48.002: INFO: pod master mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1
Sep 28 19:43:48.039: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-6806 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Sep 28 19:43:48.039: INFO: >>> kubeConfig: /root/.kube/config
Sep 28 19:43:48.323: INFO: pod master mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1
Sep 28 19:43:48.361: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-6806 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Sep 28 19:43:48.361: INFO: >>> kubeConfig: /root/.kube/config
Sep 28 19:43:48.685: INFO: pod master mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1
Sep 28 19:43:48.722: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-6806 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Sep 28 19:43:48.722: INFO: >>> kubeConfig: /root/.kube/config
Sep 28 19:43:49.004: INFO: pod master mount host: stdout: "host", stderr: "" error: <nil>
Sep 28 19:43:49.042: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-6806 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Sep 28 19:43:49.042: INFO: >>> kubeConfig: /root/.kube/config
Sep 28 19:43:49.368: INFO: pod slave mount master: stdout: "master", stderr: "" error: <nil>
Sep 28 19:43:49.406: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-6806 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Sep 28 19:43:49.406: INFO: >>> kubeConfig: /root/.kube/config
Sep 28 19:43:49.689: INFO: pod slave mount slave: stdout: "slave", stderr: "" error: <nil>
Sep 28 19:43:49.727: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-6806 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Sep 28 19:43:49.727: INFO: >>> kubeConfig: /root/.kube/config
Sep 28 19:43:50.012: INFO: pod slave mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1
Sep 28 19:43:50.050: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-6806 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Sep 28 19:43:50.050: INFO: >>> kubeConfig: /root/.kube/config
Sep 28 19:43:50.387: INFO: pod slave mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1
Sep 28 19:43:50.425: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-6806 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Sep 28 19:43:50.425: INFO: >>> kubeConfig: /root/.kube/config
Sep 28 19:43:50.731: INFO: pod slave mount host: stdout: "host", stderr: "" error: <nil>
Sep 28 19:43:50.769: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-6806 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Sep 28 19:43:50.769: INFO: >>> kubeConfig: /root/.kube/config
Sep 28 19:43:51.081: INFO: pod private mount master: stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1
Sep 28 19:43:51.119: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-6806 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Sep 28 19:43:51.119: INFO: >>> kubeConfig: /root/.kube/config
Sep 28 19:43:51.430: INFO: pod private mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1
Sep 28 19:43:51.468: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-6806 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Sep 28 19:43:51.468: INFO: >>> kubeConfig: /root/.kube/config
Sep 28 19:43:51.758: INFO: pod private mount private: stdout: "private", stderr: "" error: <nil>
Sep 28 19:43:51.796: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-6806 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Sep 28 19:43:51.796: INFO: >>> kubeConfig: /root/.kube/config
Sep 28 19:43:52.099: INFO: pod private mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1
Sep 28 19:43:52.136: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-6806 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Sep 28 19:43:52.136: INFO: >>> kubeConfig: /root/.kube/config
Sep 28 19:43:52.442: INFO: pod private mount host: stdout: "", stderr: "cat: can't open '/mnt/test/host/file': No such file or directory" error: command terminated with exit code 1
Sep 28 19:43:52.479: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-6806 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Sep 28 19:43:52.480: INFO: >>> kubeConfig: /root/.kube/config
Sep 28 19:43:52.783: INFO: pod default mount master: stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1
Sep 28 19:43:52.821: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-6806 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Sep 28 19:43:52.821: INFO: >>> kubeConfig: /root/.kube/config
Sep 28 19:43:53.120: INFO: pod default mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1
Sep 28 19:43:53.158: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-6806 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Sep 28 19:43:53.158: INFO: >>> kubeConfig: /root/.kube/config
Sep 28 19:43:53.471: INFO: pod default mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1
Sep 28 19:43:53.509: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-6806 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Sep 28 19:43:53.509: INFO: >>> kubeConfig: /root/.kube/config
Sep 28 19:43:53.803: INFO: pod default mount default: stdout: "default", stderr: "" error: <nil>
Sep 28 19:43:53.841: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-6806 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Sep 28 19:43:53.841: INFO: >>> kubeConfig: /root/.kube/config
Sep 28 19:43:54.134: INFO: pod default mount host: stdout: "", stderr: "cat: can't open '/mnt/test/host/file': No such file or directory" error: command terminated with exit code 1
Sep 28 19:43:54.135: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c test `cat "/var/lib/kubelet/mount-propagation-6806"/master/file` = master] Namespace:mount-propagation-6806 PodName:hostexec-ip-172-20-61-119.ec2.internal-498mc ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Sep 28 19:43:54.135: INFO: >>> kubeConfig: /root/.kube/config
Sep 28 19:43:54.432: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c test ! -e "/var/lib/kubelet/mount-propagation-6806"/slave/file] Namespace:mount-propagation-6806 PodName:hostexec-ip-172-20-61-119.ec2.internal-498mc ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Sep 28 19:43:54.432: INFO: >>> kubeConfig: /root/.kube/config
Sep 28 19:43:54.827: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/var/lib/kubelet/mount-propagation-6806"/host] Namespace:mount-propagation-6806 PodName:hostexec-ip-172-20-61-119.ec2.internal-498mc ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Sep 28 19:43:54.827: INFO: >>> kubeConfig: /root/.kube/config
... skipping 21 lines ...
• [SLOW TEST:27.954 seconds]
[sig-node] Mount propagation
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should propagate mounts to the host
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/mount_propagation.go:82
------------------------------
{"msg":"PASSED [sig-node] Mount propagation should propagate mounts to the host","total":-1,"completed":36,"skipped":229,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]"]}

SSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:43:56.934: INFO: Only supported for providers [openstack] (not aws)
... skipping 37 lines ...
      Only supported for providers [azure] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1566
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted with a failing exec liveness probe that took longer than the timeout","total":-1,"completed":23,"skipped":199,"failed":2,"failures":["[sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","[sig-network] Services should implement service.kubernetes.io/service-proxy-name"]}
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 28 19:43:24.089: INFO: >>> kubeConfig: /root/.kube/config
... skipping 41 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:106
STEP: Creating a pod to test downward API volume plugin
Sep 28 19:43:57.205: INFO: Waiting up to 5m0s for pod "metadata-volume-e2aca521-6c52-48f1-9032-1cf422d38b8b" in namespace "downward-api-6773" to be "Succeeded or Failed"
Sep 28 19:43:57.243: INFO: Pod "metadata-volume-e2aca521-6c52-48f1-9032-1cf422d38b8b": Phase="Pending", Reason="", readiness=false. Elapsed: 37.710953ms
Sep 28 19:43:59.281: INFO: Pod "metadata-volume-e2aca521-6c52-48f1-9032-1cf422d38b8b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.075641222s
STEP: Saw pod success
Sep 28 19:43:59.281: INFO: Pod "metadata-volume-e2aca521-6c52-48f1-9032-1cf422d38b8b" satisfied condition "Succeeded or Failed"
Sep 28 19:43:59.319: INFO: Trying to get logs from node ip-172-20-62-211.ec2.internal pod metadata-volume-e2aca521-6c52-48f1-9032-1cf422d38b8b container client-container: <nil>
STEP: delete the pod
Sep 28 19:43:59.410: INFO: Waiting for pod metadata-volume-e2aca521-6c52-48f1-9032-1cf422d38b8b to disappear
Sep 28 19:43:59.448: INFO: Pod metadata-volume-e2aca521-6c52-48f1-9032-1cf422d38b8b no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 28 19:43:59.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6773" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":37,"skipped":246,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]"]}

SSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 86 lines ...
• [SLOW TEST:61.607 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be restarted by liveness probe after startup probe enables it
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:371
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted by liveness probe after startup probe enables it","total":-1,"completed":24,"skipped":185,"failed":3,"failures":["[sig-network] Conntrack should drop INVALID conntrack entries","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]"]}

SSSSS
------------------------------
[BeforeEach] [sig-node] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 10 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 28 19:44:03.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-9710" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":40,"skipped":260,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 28 19:43:59.686: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 22 lines ...
• [SLOW TEST:5.374 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should support configurable pod resolv.conf
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:458
------------------------------
{"msg":"PASSED [sig-network] DNS should support configurable pod resolv.conf","total":-1,"completed":41,"skipped":260,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}

SSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 46 lines ...
• [SLOW TEST:8.881 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:903
------------------------------
{"msg":"PASSED [sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]","total":-1,"completed":14,"skipped":69,"failed":3,"failures":["[sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs should create a non-pre-bound PV and PVC: test write access ","[sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns should create 2 PVs and 4 PVCs: test write access"]}

S
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 7 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 28 19:44:08.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-7257" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":-1,"completed":42,"skipped":263,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}

S
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 28 19:44:05.562: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:110
STEP: Creating configMap with name configmap-test-volume-map-aaabecb2-e366-4c55-8de4-46e2aa6a6d33
STEP: Creating a pod to test consume configMaps
Sep 28 19:44:05.905: INFO: Waiting up to 5m0s for pod "pod-configmaps-ce00b716-eb63-45ac-adc8-1d257b9de45a" in namespace "configmap-4009" to be "Succeeded or Failed"
Sep 28 19:44:05.963: INFO: Pod "pod-configmaps-ce00b716-eb63-45ac-adc8-1d257b9de45a": Phase="Pending", Reason="", readiness=false. Elapsed: 57.758511ms
Sep 28 19:44:08.002: INFO: Pod "pod-configmaps-ce00b716-eb63-45ac-adc8-1d257b9de45a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.096985993s
STEP: Saw pod success
Sep 28 19:44:08.002: INFO: Pod "pod-configmaps-ce00b716-eb63-45ac-adc8-1d257b9de45a" satisfied condition "Succeeded or Failed"
Sep 28 19:44:08.040: INFO: Trying to get logs from node ip-172-20-62-211.ec2.internal pod pod-configmaps-ce00b716-eb63-45ac-adc8-1d257b9de45a container agnhost-container: <nil>
STEP: delete the pod
Sep 28 19:44:08.124: INFO: Waiting for pod pod-configmaps-ce00b716-eb63-45ac-adc8-1d257b9de45a to disappear
Sep 28 19:44:08.162: INFO: Pod pod-configmaps-ce00b716-eb63-45ac-adc8-1d257b9de45a no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 28 19:44:08.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4009" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":15,"skipped":70,"failed":3,"failures":["[sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs should create a non-pre-bound PV and PVC: test write access ","[sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns should create 2 PVs and 4 PVCs: test write access"]}

S
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 28 19:44:08.384: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c6ced6ed-7277-46cb-9ac4-93baabde8c49" in namespace "projected-2629" to be "Succeeded or Failed"
Sep 28 19:44:08.450: INFO: Pod "downwardapi-volume-c6ced6ed-7277-46cb-9ac4-93baabde8c49": Phase="Pending", Reason="", readiness=false. Elapsed: 65.44592ms
Sep 28 19:44:10.489: INFO: Pod "downwardapi-volume-c6ced6ed-7277-46cb-9ac4-93baabde8c49": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.104523738s
STEP: Saw pod success
Sep 28 19:44:10.489: INFO: Pod "downwardapi-volume-c6ced6ed-7277-46cb-9ac4-93baabde8c49" satisfied condition "Succeeded or Failed"
Sep 28 19:44:10.527: INFO: Trying to get logs from node ip-172-20-62-211.ec2.internal pod downwardapi-volume-c6ced6ed-7277-46cb-9ac4-93baabde8c49 container client-container: <nil>
STEP: delete the pod
Sep 28 19:44:10.611: INFO: Waiting for pod downwardapi-volume-c6ced6ed-7277-46cb-9ac4-93baabde8c49 to disappear
Sep 28 19:44:10.649: INFO: Pod downwardapi-volume-c6ced6ed-7277-46cb-9ac4-93baabde8c49 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 28 19:44:10.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2629" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":43,"skipped":264,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:44:10.743: INFO: Only supported for providers [gce gke] (not aws)
... skipping 14 lines ...
      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1301
------------------------------
SSSSS
------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":190,"failed":3,"failures":["[sig-network] Conntrack should drop INVALID conntrack entries","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]"]}
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 28 19:44:03.805: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 31 lines ...
• [SLOW TEST:7.113 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  test Deployment ReplicaSet orphaning and adoption regarding controllerRef
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:133
------------------------------
{"msg":"PASSED [sig-apps] Deployment test Deployment ReplicaSet orphaning and adoption regarding controllerRef","total":-1,"completed":26,"skipped":190,"failed":3,"failures":["[sig-network] Conntrack should drop INVALID conntrack entries","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]"]}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:44:10.944: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
... skipping 45 lines ...
Sep 28 19:43:41.672: INFO: PersistentVolumeClaim pvc-njtxw found but phase is Pending instead of Bound.
Sep 28 19:43:43.710: INFO: PersistentVolumeClaim pvc-njtxw found and phase=Bound (10.225534462s)
Sep 28 19:43:43.710: INFO: Waiting up to 3m0s for PersistentVolume local-vmpv6 to have phase Bound
Sep 28 19:43:43.746: INFO: PersistentVolume local-vmpv6 found and phase=Bound (36.766455ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-bxkm
STEP: Creating a pod to test atomic-volume-subpath
Sep 28 19:43:43.858: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-bxkm" in namespace "provisioning-2649" to be "Succeeded or Failed"
Sep 28 19:43:43.895: INFO: Pod "pod-subpath-test-preprovisionedpv-bxkm": Phase="Pending", Reason="", readiness=false. Elapsed: 36.70597ms
Sep 28 19:43:45.933: INFO: Pod "pod-subpath-test-preprovisionedpv-bxkm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074769014s
Sep 28 19:43:47.971: INFO: Pod "pod-subpath-test-preprovisionedpv-bxkm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.112846938s
Sep 28 19:43:50.010: INFO: Pod "pod-subpath-test-preprovisionedpv-bxkm": Phase="Pending", Reason="", readiness=false. Elapsed: 6.151141126s
Sep 28 19:43:52.047: INFO: Pod "pod-subpath-test-preprovisionedpv-bxkm": Phase="Running", Reason="", readiness=true. Elapsed: 8.188954302s
Sep 28 19:43:54.085: INFO: Pod "pod-subpath-test-preprovisionedpv-bxkm": Phase="Running", Reason="", readiness=true. Elapsed: 10.226691274s
... skipping 3 lines ...
Sep 28 19:44:02.240: INFO: Pod "pod-subpath-test-preprovisionedpv-bxkm": Phase="Running", Reason="", readiness=true. Elapsed: 18.381652002s
Sep 28 19:44:04.278: INFO: Pod "pod-subpath-test-preprovisionedpv-bxkm": Phase="Running", Reason="", readiness=true. Elapsed: 20.419205471s
Sep 28 19:44:06.365: INFO: Pod "pod-subpath-test-preprovisionedpv-bxkm": Phase="Running", Reason="", readiness=true. Elapsed: 22.506539913s
Sep 28 19:44:08.415: INFO: Pod "pod-subpath-test-preprovisionedpv-bxkm": Phase="Running", Reason="", readiness=true. Elapsed: 24.556603066s
Sep 28 19:44:10.453: INFO: Pod "pod-subpath-test-preprovisionedpv-bxkm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.594451333s
STEP: Saw pod success
Sep 28 19:44:10.453: INFO: Pod "pod-subpath-test-preprovisionedpv-bxkm" satisfied condition "Succeeded or Failed"
Sep 28 19:44:10.490: INFO: Trying to get logs from node ip-172-20-50-189.ec2.internal pod pod-subpath-test-preprovisionedpv-bxkm container test-container-subpath-preprovisionedpv-bxkm: <nil>
STEP: delete the pod
Sep 28 19:44:10.580: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-bxkm to disappear
Sep 28 19:44:10.619: INFO: Pod pod-subpath-test-preprovisionedpv-bxkm no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-bxkm
Sep 28 19:44:10.619: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-bxkm" in namespace "provisioning-2649"
... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":38,"skipped":213,"failed":3,"failures":["[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns should create 3 PVs and 3 PVCs: test write access","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:44:12.245: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 51 lines ...
[It] should support readOnly directory specified in the volumeMount
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:369
Sep 28 19:44:08.500: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Sep 28 19:44:08.500: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-44g7
STEP: Creating a pod to test subpath
Sep 28 19:44:08.562: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-44g7" in namespace "provisioning-231" to be "Succeeded or Failed"
Sep 28 19:44:08.610: INFO: Pod "pod-subpath-test-inlinevolume-44g7": Phase="Pending", Reason="", readiness=false. Elapsed: 48.616834ms
Sep 28 19:44:10.649: INFO: Pod "pod-subpath-test-inlinevolume-44g7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086776853s
Sep 28 19:44:12.686: INFO: Pod "pod-subpath-test-inlinevolume-44g7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.124328316s
STEP: Saw pod success
Sep 28 19:44:12.686: INFO: Pod "pod-subpath-test-inlinevolume-44g7" satisfied condition "Succeeded or Failed"
Sep 28 19:44:12.723: INFO: Trying to get logs from node ip-172-20-36-158.ec2.internal pod pod-subpath-test-inlinevolume-44g7 container test-container-subpath-inlinevolume-44g7: <nil>
STEP: delete the pod
Sep 28 19:44:12.814: INFO: Waiting for pod pod-subpath-test-inlinevolume-44g7 to disappear
Sep 28 19:44:12.851: INFO: Pod pod-subpath-test-inlinevolume-44g7 no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-44g7
Sep 28 19:44:12.851: INFO: Deleting pod "pod-subpath-test-inlinevolume-44g7" in namespace "provisioning-231"
... skipping 3 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 28 19:44:12.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-231" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":16,"skipped":71,"failed":3,"failures":["[sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs should create a non-pre-bound PV and PVC: test write access ","[sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns should create 2 PVs and 4 PVCs: test write access"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:44:13.020: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 127 lines ...
      Driver emptydir doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":25,"skipped":195,"failed":1,"failures":["[sig-network] Services should implement service.kubernetes.io/headless"]}
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 28 19:42:23.273: INFO: >>> kubeConfig: /root/.kube/config
... skipping 37 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should resize volume when PVC is edited while pod is using it
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:246
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it","total":-1,"completed":26,"skipped":195,"failed":1,"failures":["[sig-network] Services should implement service.kubernetes.io/headless"]}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:44:13.565: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 140 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":28,"skipped":210,"failed":2,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]}

S
------------------------------
[BeforeEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 28 19:44:10.951: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap configmap-3097/configmap-test-efb5bf8b-712e-400b-b1c5-f07a04193c22
STEP: Creating a pod to test consume configMaps
Sep 28 19:44:11.215: INFO: Waiting up to 5m0s for pod "pod-configmaps-62e4b811-fd32-498c-b02a-9ef0398a128d" in namespace "configmap-3097" to be "Succeeded or Failed"
Sep 28 19:44:11.252: INFO: Pod "pod-configmaps-62e4b811-fd32-498c-b02a-9ef0398a128d": Phase="Pending", Reason="", readiness=false. Elapsed: 37.085863ms
Sep 28 19:44:13.324: INFO: Pod "pod-configmaps-62e4b811-fd32-498c-b02a-9ef0398a128d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.109076732s
STEP: Saw pod success
Sep 28 19:44:13.324: INFO: Pod "pod-configmaps-62e4b811-fd32-498c-b02a-9ef0398a128d" satisfied condition "Succeeded or Failed"
Sep 28 19:44:13.366: INFO: Trying to get logs from node ip-172-20-62-211.ec2.internal pod pod-configmaps-62e4b811-fd32-498c-b02a-9ef0398a128d container env-test: <nil>
STEP: delete the pod
Sep 28 19:44:13.498: INFO: Waiting for pod pod-configmaps-62e4b811-fd32-498c-b02a-9ef0398a128d to disappear
Sep 28 19:44:13.550: INFO: Pod pod-configmaps-62e4b811-fd32-498c-b02a-9ef0398a128d no longer exists
[AfterEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 22 lines ...
Sep 28 19:44:13.926: INFO: pv is nil


S [SKIPPING] in Spec Setup (BeforeEach) [0.265 seconds]
[sig-storage] PersistentVolumes GCEPD
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should test that deleting the PV before the pod does not cause pod deletion to fail on PD detach [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:142

  Only supported for providers [gce gke] (not aws)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:85
------------------------------
S
------------------------------
{"msg":"PASSED [sig-network] Services should allow pods to hairpin back to themselves through services","total":-1,"completed":42,"skipped":292,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]}
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 28 19:43:20.852: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 16 lines ...
• [SLOW TEST:53.440 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":43,"skipped":292,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:44:14.300: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 47 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64
[It] should support unsafe sysctls which are actually allowed [MinimumKubeletVersion:1.21] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod with the kernel.shm_rmid_forced sysctl
STEP: Watching for error events or started pod
STEP: Waiting for pod completion
STEP: Checking that the pod succeeded
STEP: Getting logs from the pod
STEP: Checking that the sysctl is actually updated
[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 28 19:44:15.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sysctl-7801" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support unsafe sysctls which are actually allowed [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":44,"skipped":271,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}

SSSSS
------------------------------
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 28 19:43:42.892: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Sep 28 19:43:43.159: INFO: created pod
Sep 28 19:43:43.159: INFO: Waiting up to 5m0s for pod "oidc-discovery-validator" in namespace "svcaccounts-7713" to be "Succeeded or Failed"
Sep 28 19:43:43.197: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 37.620708ms
Sep 28 19:43:45.236: INFO: Pod "oidc-discovery-validator": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.076587063s
STEP: Saw pod success
Sep 28 19:43:45.236: INFO: Pod "oidc-discovery-validator" satisfied condition "Succeeded or Failed"
Sep 28 19:44:15.236: INFO: polling logs
Sep 28 19:44:15.275: INFO: Pod logs: 
2021/09/28 19:43:43 OK: Got token
2021/09/28 19:43:43 validating with in-cluster discovery
2021/09/28 19:43:43 OK: got issuer https://api.internal.e2e-b08e534318-62691.test-cncf-aws.k8s.io
2021/09/28 19:43:43 Full, not-validated claims: 
... skipping 14 lines ...
• [SLOW TEST:32.499 seconds]
[sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]","total":-1,"completed":19,"skipped":180,"failed":3,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}

SS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 11 lines ...
Sep 28 19:44:16.603: INFO: Creating a PV followed by a PVC
Sep 28 19:44:16.674: INFO: Waiting for PV local-pvw6n9l to bind to PVC pvc-98sf2
Sep 28 19:44:16.674: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-98sf2] to have phase Bound
Sep 28 19:44:16.719: INFO: PersistentVolumeClaim pvc-98sf2 found and phase=Bound (44.946692ms)
Sep 28 19:44:16.719: INFO: Waiting up to 3m0s for PersistentVolume local-pvw6n9l to have phase Bound
Sep 28 19:44:16.755: INFO: PersistentVolume local-pvw6n9l found and phase=Bound (36.425854ms)
[It] should fail scheduling due to different NodeSelector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:379
STEP: local-volume-type: dir
STEP: Initializing test volumes
Sep 28 19:44:16.829: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-adce8973-8d36-4d76-bdf2-38936ef3d672] Namespace:persistent-local-volumes-test-7272 PodName:hostexec-ip-172-20-61-119.ec2.internal-vc4k5 ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Sep 28 19:44:16.829: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating local PVCs and PVs
... skipping 18 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 28 19:44:17.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-7272" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  Pod with node different from PV's NodeAffinity should fail scheduling due to different NodeSelector","total":-1,"completed":29,"skipped":217,"failed":2,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:44:17.895: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 65 lines ...
Sep 28 19:44:10.995: INFO: PersistentVolumeClaim pvc-fjznv found but phase is Pending instead of Bound.
Sep 28 19:44:13.032: INFO: PersistentVolumeClaim pvc-fjznv found and phase=Bound (10.232842249s)
Sep 28 19:44:13.033: INFO: Waiting up to 3m0s for PersistentVolume local-qn9tn to have phase Bound
Sep 28 19:44:13.070: INFO: PersistentVolume local-qn9tn found and phase=Bound (37.477583ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-vxf6
STEP: Creating a pod to test exec-volume-test
Sep 28 19:44:13.210: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-vxf6" in namespace "volume-8412" to be "Succeeded or Failed"
Sep 28 19:44:13.275: INFO: Pod "exec-volume-test-preprovisionedpv-vxf6": Phase="Pending", Reason="", readiness=false. Elapsed: 64.752985ms
Sep 28 19:44:15.315: INFO: Pod "exec-volume-test-preprovisionedpv-vxf6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.105024625s
Sep 28 19:44:17.354: INFO: Pod "exec-volume-test-preprovisionedpv-vxf6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.143531212s
STEP: Saw pod success
Sep 28 19:44:17.354: INFO: Pod "exec-volume-test-preprovisionedpv-vxf6" satisfied condition "Succeeded or Failed"
Sep 28 19:44:17.391: INFO: Trying to get logs from node ip-172-20-61-119.ec2.internal pod exec-volume-test-preprovisionedpv-vxf6 container exec-container-preprovisionedpv-vxf6: <nil>
STEP: delete the pod
Sep 28 19:44:17.472: INFO: Waiting for pod exec-volume-test-preprovisionedpv-vxf6 to disappear
Sep 28 19:44:17.510: INFO: Pod exec-volume-test-preprovisionedpv-vxf6 no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-vxf6
Sep 28 19:44:17.510: INFO: Deleting pod "exec-volume-test-preprovisionedpv-vxf6" in namespace "volume-8412"
... skipping 38 lines ...
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Sep 28 19:43:18.127: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63768454998, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63768454998, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63768454998, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63768454998, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Sep 28 19:43:21.367: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
Sep 28 19:43:31.520: INFO: Waiting for webhook configuration to be ready...
Sep 28 19:43:41.697: INFO: Waiting for webhook configuration to be ready...
Sep 28 19:43:51.798: INFO: Waiting for webhook configuration to be ready...
Sep 28 19:44:01.907: INFO: Waiting for webhook configuration to be ready...
Sep 28 19:44:11.983: INFO: Waiting for webhook configuration to be ready...
Sep 28 19:44:11.983: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.errorString | 0xc000244250>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 458 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102


• Failure [63.252 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should unconditionally reject operations on fail closed webhook [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Sep 28 19:44:11.983: waiting for webhook configuration to be ready
  Unexpected error:
      <*errors.errorString | 0xc000244250>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1275
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":-1,"completed":26,"skipped":185,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]"]}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 28 19:44:19.768: INFO: >>> kubeConfig: /root/.kube/config
... skipping 21 lines ...
      Driver "local" does not provide raw block - skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:113
------------------------------
SSSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":38,"skipped":250,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]"]}
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 28 19:44:18.679: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 28 19:44:20.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4606" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":-1,"completed":39,"skipped":250,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:44:20.858: INFO: Only supported for providers [openstack] (not aws)
... skipping 39 lines ...
• [SLOW TEST:242.986 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":28,"skipped":175,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}

SSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 75 lines ...
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: block] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":20,"skipped":182,"failed":3,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:44:26.124: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: gluster]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for node OS distro [gci ubuntu custom] (not debian)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:263
------------------------------
... skipping 49 lines ...
• [SLOW TEST:15.849 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a custom resource.
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/resource_quota.go:583
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a custom resource.","total":-1,"completed":39,"skipped":222,"failed":3,"failures":["[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns should create 3 PVs and 3 PVCs: test write access","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]"]}

SSSSSSS
------------------------------
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 21 lines ...
• [SLOW TEST:15.970 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be submitted and removed [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Pods should be submitted and removed [NodeConformance] [Conformance]","total":-1,"completed":27,"skipped":190,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]"]}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:44:36.047: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 159 lines ...
Sep 28 19:44:24.959: INFO: Waiting for pod aws-client to disappear
Sep 28 19:44:24.994: INFO: Pod aws-client no longer exists
STEP: cleaning the environment after aws
STEP: Deleting pv and pvc
Sep 28 19:44:24.994: INFO: Deleting PersistentVolumeClaim "pvc-8gl5j"
Sep 28 19:44:25.032: INFO: Deleting PersistentVolume "aws-8zghl"
Sep 28 19:44:25.235: INFO: Couldn't delete PD "aws://us-east-1a/vol-012e2c216ce76dbb7", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-012e2c216ce76dbb7 is currently attached to i-03f17841d09a5163a
	status code: 400, request id: 06a99fe4-df6b-4f5a-be5e-0a7f6149315e
Sep 28 19:44:30.605: INFO: Couldn't delete PD "aws://us-east-1a/vol-012e2c216ce76dbb7", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-012e2c216ce76dbb7 is currently attached to i-03f17841d09a5163a
	status code: 400, request id: bf3da616-8fba-40dd-8eb0-8c1ba17d8194
Sep 28 19:44:36.053: INFO: Successfully deleted PD "aws://us-east-1a/vol-012e2c216ce76dbb7".
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 28 19:44:36.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-7064" for this suite.
... skipping 27 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:157

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data","total":-1,"completed":40,"skipped":467,"failed":2,"failures":["[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
... skipping 19 lines ...
Sep 28 19:44:26.550: INFO: PersistentVolumeClaim pvc-5lf9v found but phase is Pending instead of Bound.
Sep 28 19:44:28.587: INFO: PersistentVolumeClaim pvc-5lf9v found and phase=Bound (10.226294242s)
Sep 28 19:44:28.588: INFO: Waiting up to 3m0s for PersistentVolume local-xrlxs to have phase Bound
Sep 28 19:44:28.624: INFO: PersistentVolume local-xrlxs found and phase=Bound (36.658858ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-6wd2
STEP: Creating a pod to test exec-volume-test
Sep 28 19:44:28.740: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-6wd2" in namespace "volume-5765" to be "Succeeded or Failed"
Sep 28 19:44:28.777: INFO: Pod "exec-volume-test-preprovisionedpv-6wd2": Phase="Pending", Reason="", readiness=false. Elapsed: 36.888218ms
Sep 28 19:44:30.814: INFO: Pod "exec-volume-test-preprovisionedpv-6wd2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07433261s
Sep 28 19:44:32.853: INFO: Pod "exec-volume-test-preprovisionedpv-6wd2": Phase="Running", Reason="", readiness=true. Elapsed: 4.113034248s
Sep 28 19:44:34.891: INFO: Pod "exec-volume-test-preprovisionedpv-6wd2": Phase="Running", Reason="", readiness=true. Elapsed: 6.151640323s
Sep 28 19:44:36.931: INFO: Pod "exec-volume-test-preprovisionedpv-6wd2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.191015278s
STEP: Saw pod success
Sep 28 19:44:36.931: INFO: Pod "exec-volume-test-preprovisionedpv-6wd2" satisfied condition "Succeeded or Failed"
Sep 28 19:44:36.968: INFO: Trying to get logs from node ip-172-20-50-189.ec2.internal pod exec-volume-test-preprovisionedpv-6wd2 container exec-container-preprovisionedpv-6wd2: <nil>
STEP: delete the pod
Sep 28 19:44:37.505: INFO: Waiting for pod exec-volume-test-preprovisionedpv-6wd2 to disappear
Sep 28 19:44:37.542: INFO: Pod exec-volume-test-preprovisionedpv-6wd2 no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-6wd2
Sep 28 19:44:37.542: INFO: Deleting pod "exec-volume-test-preprovisionedpv-6wd2" in namespace "volume-5765"
... skipping 17 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":17,"skipped":96,"failed":3,"failures":["[sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs should create a non-pre-bound PV and PVC: test write access ","[sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns should create 2 PVs and 4 PVCs: test write access"]}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:44:38.109: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 57 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 28 19:44:38.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-391" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":-1,"completed":18,"skipped":105,"failed":3,"failures":["[sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs should create a non-pre-bound PV and PVC: test write access ","[sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns should create 2 PVs and 4 PVCs: test write access"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:44:38.481: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 135 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_secret.go:90
STEP: Creating projection with secret that has name projected-secret-test-02859afd-e4a5-4187-8da0-000214488764
STEP: Creating a pod to test consume secrets
Sep 28 19:44:36.538: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-59de1c33-adf8-412f-901d-fc1430123530" in namespace "projected-5046" to be "Succeeded or Failed"
Sep 28 19:44:36.573: INFO: Pod "pod-projected-secrets-59de1c33-adf8-412f-901d-fc1430123530": Phase="Pending", Reason="", readiness=false. Elapsed: 34.937151ms
Sep 28 19:44:38.610: INFO: Pod "pod-projected-secrets-59de1c33-adf8-412f-901d-fc1430123530": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.07221657s
STEP: Saw pod success
Sep 28 19:44:38.610: INFO: Pod "pod-projected-secrets-59de1c33-adf8-412f-901d-fc1430123530" satisfied condition "Succeeded or Failed"
Sep 28 19:44:38.646: INFO: Trying to get logs from node ip-172-20-61-119.ec2.internal pod pod-projected-secrets-59de1c33-adf8-412f-901d-fc1430123530 container projected-secret-volume-test: <nil>
STEP: delete the pod
Sep 28 19:44:38.724: INFO: Waiting for pod pod-projected-secrets-59de1c33-adf8-412f-901d-fc1430123530 to disappear
Sep 28 19:44:38.759: INFO: Pod pod-projected-secrets-59de1c33-adf8-412f-901d-fc1430123530 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 28 19:44:38.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5046" for this suite.
STEP: Destroying namespace "secret-namespace-1775" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]","total":-1,"completed":41,"skipped":469,"failed":2,"failures":["[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]}
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:44:38.882: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 32 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 28 19:44:38.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-7486" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":-1,"completed":19,"skipped":134,"failed":3,"failures":["[sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs should create a non-pre-bound PV and PVC: test write access ","[sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns should create 2 PVs and 4 PVCs: test write access"]}
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:44:39.062: INFO: Driver hostPathSymlink doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 52 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 28 19:44:39.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9391" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":-1,"completed":20,"skipped":139,"failed":3,"failures":["[sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs should create a non-pre-bound PV and PVC: test write access ","[sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns should create 2 PVs and 4 PVCs: test write access"]}

S
------------------------------
[BeforeEach] [sig-network] EndpointSlice
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 25 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 28 19:44:40.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslice-5221" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","total":-1,"completed":21,"skipped":140,"failed":3,"failures":["[sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs should create a non-pre-bound PV and PVC: test write access ","[sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns should create 2 PVs and 4 PVCs: test write access"]}

SS
------------------------------
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 28 19:44:38.897: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount projected service account token [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test service account token: 
Sep 28 19:44:39.113: INFO: Waiting up to 5m0s for pod "test-pod-db6da900-9674-43dc-aadd-53ddcb28b8f4" in namespace "svcaccounts-7539" to be "Succeeded or Failed"
Sep 28 19:44:39.148: INFO: Pod "test-pod-db6da900-9674-43dc-aadd-53ddcb28b8f4": Phase="Pending", Reason="", readiness=false. Elapsed: 35.070247ms
Sep 28 19:44:41.188: INFO: Pod "test-pod-db6da900-9674-43dc-aadd-53ddcb28b8f4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.074707175s
STEP: Saw pod success
Sep 28 19:44:41.188: INFO: Pod "test-pod-db6da900-9674-43dc-aadd-53ddcb28b8f4" satisfied condition "Succeeded or Failed"
Sep 28 19:44:41.224: INFO: Trying to get logs from node ip-172-20-61-119.ec2.internal pod test-pod-db6da900-9674-43dc-aadd-53ddcb28b8f4 container agnhost-container: <nil>
STEP: delete the pod
Sep 28 19:44:41.301: INFO: Waiting for pod test-pod-db6da900-9674-43dc-aadd-53ddcb28b8f4 to disappear
Sep 28 19:44:41.336: INFO: Pod test-pod-db6da900-9674-43dc-aadd-53ddcb28b8f4 no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 28 19:44:41.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-7539" for this suite.

•
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount projected service account token [Conformance]","total":-1,"completed":42,"skipped":472,"failed":2,"failures":["[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]}

S
------------------------------
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 28 19:44:40.601: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name projected-secret-test-5a9ecc25-27a5-4431-ae47-b2b08be8d5e6
STEP: Creating a pod to test consume secrets
Sep 28 19:44:40.866: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-979c00d6-9b2c-46fc-81c3-bb00380b23eb" in namespace "projected-6741" to be "Succeeded or Failed"
Sep 28 19:44:40.903: INFO: Pod "pod-projected-secrets-979c00d6-9b2c-46fc-81c3-bb00380b23eb": Phase="Pending", Reason="", readiness=false. Elapsed: 37.068027ms
Sep 28 19:44:42.946: INFO: Pod "pod-projected-secrets-979c00d6-9b2c-46fc-81c3-bb00380b23eb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.080590395s
STEP: Saw pod success
Sep 28 19:44:42.946: INFO: Pod "pod-projected-secrets-979c00d6-9b2c-46fc-81c3-bb00380b23eb" satisfied condition "Succeeded or Failed"
Sep 28 19:44:42.985: INFO: Trying to get logs from node ip-172-20-61-119.ec2.internal pod pod-projected-secrets-979c00d6-9b2c-46fc-81c3-bb00380b23eb container secret-volume-test: <nil>
STEP: delete the pod
Sep 28 19:44:43.074: INFO: Waiting for pod pod-projected-secrets-979c00d6-9b2c-46fc-81c3-bb00380b23eb to disappear
Sep 28 19:44:43.114: INFO: Pod pod-projected-secrets-979c00d6-9b2c-46fc-81c3-bb00380b23eb no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 28 19:44:43.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6741" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":142,"failed":3,"failures":["[sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs should create a non-pre-bound PV and PVC: test write access ","[sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns should create 2 PVs and 4 PVCs: test write access"]}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:44:43.212: INFO: Only supported for providers [azure] (not aws)
... skipping 37 lines ...
      Driver hostPathSymlink doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SSSSSSS
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":27,"skipped":195,"failed":3,"failures":["[sig-network] Conntrack should drop INVALID conntrack entries","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]"]}
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 28 19:44:13.650: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 72 lines ...
Sep 28 19:44:41.317: INFO: PersistentVolumeClaim pvc-qj4bl found but phase is Pending instead of Bound.
Sep 28 19:44:43.356: INFO: PersistentVolumeClaim pvc-qj4bl found and phase=Bound (4.113370134s)
Sep 28 19:44:43.356: INFO: Waiting up to 3m0s for PersistentVolume local-tcmck to have phase Bound
Sep 28 19:44:43.393: INFO: PersistentVolume local-tcmck found and phase=Bound (37.118182ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-zdjv
STEP: Creating a pod to test exec-volume-test
Sep 28 19:44:43.508: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-zdjv" in namespace "volume-1428" to be "Succeeded or Failed"
Sep 28 19:44:43.545: INFO: Pod "exec-volume-test-preprovisionedpv-zdjv": Phase="Pending", Reason="", readiness=false. Elapsed: 37.317911ms
Sep 28 19:44:45.587: INFO: Pod "exec-volume-test-preprovisionedpv-zdjv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.078767168s
STEP: Saw pod success
Sep 28 19:44:45.587: INFO: Pod "exec-volume-test-preprovisionedpv-zdjv" satisfied condition "Succeeded or Failed"
Sep 28 19:44:45.625: INFO: Trying to get logs from node ip-172-20-36-158.ec2.internal pod exec-volume-test-preprovisionedpv-zdjv container exec-container-preprovisionedpv-zdjv: <nil>
STEP: delete the pod
Sep 28 19:44:45.715: INFO: Waiting for pod exec-volume-test-preprovisionedpv-zdjv to disappear
Sep 28 19:44:45.753: INFO: Pod exec-volume-test-preprovisionedpv-zdjv no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-zdjv
Sep 28 19:44:45.753: INFO: Deleting pod "exec-volume-test-preprovisionedpv-zdjv" in namespace "volume-1428"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (ext4)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume","total":-1,"completed":28,"skipped":217,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]"]}

SSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:44:46.945: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 173 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:449
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":44,"skipped":307,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:44:47.045: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 120 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-map-dfbe9ede-ba62-4723-89a9-e451b9bf98e6
STEP: Creating a pod to test consume configMaps
Sep 28 19:44:47.313: INFO: Waiting up to 5m0s for pod "pod-configmaps-94b486e3-a9f6-4078-8d05-66b98e2f2d4e" in namespace "configmap-6116" to be "Succeeded or Failed"
Sep 28 19:44:47.348: INFO: Pod "pod-configmaps-94b486e3-a9f6-4078-8d05-66b98e2f2d4e": Phase="Pending", Reason="", readiness=false. Elapsed: 35.030255ms
Sep 28 19:44:49.383: INFO: Pod "pod-configmaps-94b486e3-a9f6-4078-8d05-66b98e2f2d4e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.070147577s
STEP: Saw pod success
Sep 28 19:44:49.383: INFO: Pod "pod-configmaps-94b486e3-a9f6-4078-8d05-66b98e2f2d4e" satisfied condition "Succeeded or Failed"
Sep 28 19:44:49.418: INFO: Trying to get logs from node ip-172-20-36-158.ec2.internal pod pod-configmaps-94b486e3-a9f6-4078-8d05-66b98e2f2d4e container agnhost-container: <nil>
STEP: delete the pod
Sep 28 19:44:49.492: INFO: Waiting for pod pod-configmaps-94b486e3-a9f6-4078-8d05-66b98e2f2d4e to disappear
Sep 28 19:44:49.527: INFO: Pod pod-configmaps-94b486e3-a9f6-4078-8d05-66b98e2f2d4e no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 28 19:44:49.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6116" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":45,"skipped":311,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:44:49.617: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 60 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 28 19:44:49.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-9959" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return pod details","total":-1,"completed":46,"skipped":327,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]}

SSSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 19 lines ...
Sep 28 19:44:45.954: INFO: PersistentVolumeClaim pvc-5njvx found and phase=Bound (36.851367ms)
Sep 28 19:44:45.954: INFO: Waiting up to 3m0s for PersistentVolume nfs-hmvwz to have phase Bound
Sep 28 19:44:45.991: INFO: PersistentVolume nfs-hmvwz found and phase=Bound (36.694083ms)
STEP: Checking pod has write access to PersistentVolume
Sep 28 19:44:46.065: INFO: Creating nfs test pod
Sep 28 19:44:46.103: INFO: Pod should terminate with exitcode 0 (success)
Sep 28 19:44:46.103: INFO: Waiting up to 5m0s for pod "pvc-tester-k947n" in namespace "pv-9023" to be "Succeeded or Failed"
Sep 28 19:44:46.140: INFO: Pod "pvc-tester-k947n": Phase="Pending", Reason="", readiness=false. Elapsed: 36.829009ms
Sep 28 19:44:48.178: INFO: Pod "pvc-tester-k947n": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.074973165s
STEP: Saw pod success
Sep 28 19:44:48.178: INFO: Pod "pvc-tester-k947n" satisfied condition "Succeeded or Failed"
Sep 28 19:44:48.178: INFO: Pod pvc-tester-k947n succeeded 
Sep 28 19:44:48.178: INFO: Deleting pod "pvc-tester-k947n" in namespace "pv-9023"
Sep 28 19:44:48.221: INFO: Wait up to 5m0s for pod "pvc-tester-k947n" to be fully deleted
STEP: Deleting the PVC to invoke the reclaim policy.
Sep 28 19:44:48.258: INFO: Deleting PVC pvc-5njvx to trigger reclamation of PV nfs-hmvwz
Sep 28 19:44:48.258: INFO: Deleting PersistentVolumeClaim "pvc-5njvx"
... skipping 23 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:122
    with Single PV - PVC pairs
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:155
      create a PV and a pre-bound PVC: test write access
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PV and a pre-bound PVC: test write access","total":-1,"completed":23,"skipped":157,"failed":3,"failures":["[sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs should create a non-pre-bound PV and PVC: test write access ","[sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns should create 2 PVs and 4 PVCs: test write access"]}

S
------------------------------
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 38 lines ...
Sep 28 19:44:40.385: INFO: PersistentVolumeClaim pvc-24plq found but phase is Pending instead of Bound.
Sep 28 19:44:42.424: INFO: PersistentVolumeClaim pvc-24plq found and phase=Bound (14.310854186s)
Sep 28 19:44:42.424: INFO: Waiting up to 3m0s for PersistentVolume local-8zgzj to have phase Bound
Sep 28 19:44:42.462: INFO: PersistentVolume local-8zgzj found and phase=Bound (38.177192ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-cncf
STEP: Creating a pod to test subpath
Sep 28 19:44:42.577: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-cncf" in namespace "provisioning-1692" to be "Succeeded or Failed"
Sep 28 19:44:42.615: INFO: Pod "pod-subpath-test-preprovisionedpv-cncf": Phase="Pending", Reason="", readiness=false. Elapsed: 37.984806ms
Sep 28 19:44:44.655: INFO: Pod "pod-subpath-test-preprovisionedpv-cncf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077414325s
Sep 28 19:44:46.697: INFO: Pod "pod-subpath-test-preprovisionedpv-cncf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.119754021s
Sep 28 19:44:48.736: INFO: Pod "pod-subpath-test-preprovisionedpv-cncf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.158991106s
Sep 28 19:44:50.775: INFO: Pod "pod-subpath-test-preprovisionedpv-cncf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.197580028s
Sep 28 19:44:52.814: INFO: Pod "pod-subpath-test-preprovisionedpv-cncf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.236825486s
STEP: Saw pod success
Sep 28 19:44:52.814: INFO: Pod "pod-subpath-test-preprovisionedpv-cncf" satisfied condition "Succeeded or Failed"
Sep 28 19:44:52.852: INFO: Trying to get logs from node ip-172-20-50-189.ec2.internal pod pod-subpath-test-preprovisionedpv-cncf container test-container-subpath-preprovisionedpv-cncf: <nil>
STEP: delete the pod
Sep 28 19:44:52.945: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-cncf to disappear
Sep 28 19:44:52.982: INFO: Pod pod-subpath-test-preprovisionedpv-cncf no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-cncf
Sep 28 19:44:52.982: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-cncf" in namespace "provisioning-1692"
... skipping 60 lines ...
• [SLOW TEST:8.807 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":28,"skipped":196,"failed":3,"failures":["[sig-network] Conntrack should drop INVALID conntrack entries","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:44:54.265: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 79 lines ...
      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1301
------------------------------
S
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":-1,"completed":24,"skipped":158,"failed":3,"failures":["[sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs should create a non-pre-bound PV and PVC: test write access ","[sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns should create 2 PVs and 4 PVCs: test write access"]}
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 28 19:44:53.005: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir volume type on node default medium
Sep 28 19:44:53.230: INFO: Waiting up to 5m0s for pod "pod-21449d17-0558-4ad4-9ccf-e53491e87226" in namespace "emptydir-5705" to be "Succeeded or Failed"
Sep 28 19:44:53.267: INFO: Pod "pod-21449d17-0558-4ad4-9ccf-e53491e87226": Phase="Pending", Reason="", readiness=false. Elapsed: 36.75469ms
Sep 28 19:44:55.305: INFO: Pod "pod-21449d17-0558-4ad4-9ccf-e53491e87226": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.075080575s
STEP: Saw pod success
Sep 28 19:44:55.305: INFO: Pod "pod-21449d17-0558-4ad4-9ccf-e53491e87226" satisfied condition "Succeeded or Failed"
Sep 28 19:44:55.343: INFO: Trying to get logs from node ip-172-20-62-211.ec2.internal pod pod-21449d17-0558-4ad4-9ccf-e53491e87226 container test-container: <nil>
STEP: delete the pod
Sep 28 19:44:55.422: INFO: Waiting for pod pod-21449d17-0558-4ad4-9ccf-e53491e87226 to disappear
Sep 28 19:44:55.461: INFO: Pod pod-21449d17-0558-4ad4-9ccf-e53491e87226 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 28 19:44:55.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5705" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":158,"failed":3,"failures":["[sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs should create a non-pre-bound PV and PVC: test write access ","[sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns should create 2 PVs and 4 PVCs: test write access"]}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:44:55.555: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 24 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-56c98b71-12c6-4c2b-b182-e6e5827a9c63
STEP: Creating a pod to test consume secrets
Sep 28 19:44:54.574: INFO: Waiting up to 5m0s for pod "pod-secrets-97a51cba-b363-49d1-a3da-f6810de38b6e" in namespace "secrets-100" to be "Succeeded or Failed"
Sep 28 19:44:54.612: INFO: Pod "pod-secrets-97a51cba-b363-49d1-a3da-f6810de38b6e": Phase="Pending", Reason="", readiness=false. Elapsed: 37.902187ms
Sep 28 19:44:56.650: INFO: Pod "pod-secrets-97a51cba-b363-49d1-a3da-f6810de38b6e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.07598639s
STEP: Saw pod success
Sep 28 19:44:56.650: INFO: Pod "pod-secrets-97a51cba-b363-49d1-a3da-f6810de38b6e" satisfied condition "Succeeded or Failed"
Sep 28 19:44:56.696: INFO: Trying to get logs from node ip-172-20-62-211.ec2.internal pod pod-secrets-97a51cba-b363-49d1-a3da-f6810de38b6e container secret-env-test: <nil>
STEP: delete the pod
Sep 28 19:44:56.781: INFO: Waiting for pod pod-secrets-97a51cba-b363-49d1-a3da-f6810de38b6e to disappear
Sep 28 19:44:56.818: INFO: Pod pod-secrets-97a51cba-b363-49d1-a3da-f6810de38b6e no longer exists
[AfterEach] [sig-node] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 28 19:44:56.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-100" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":-1,"completed":29,"skipped":204,"failed":3,"failures":["[sig-network] Conntrack should drop INVALID conntrack entries","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]"]}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
... skipping 261 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196

      Driver csi-hostpath doesn't support InlineVolume -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":40,"skipped":252,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]"]}
[BeforeEach] [sig-cli] Kubectl Port forwarding
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 28 19:44:49.207: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename port-forwarding
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 20 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  With a server listening on 0.0.0.0
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:452
    should support forwarding over websockets
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:468
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 should support forwarding over websockets","total":-1,"completed":41,"skipped":252,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]"]}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:44:57.824: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 60 lines ...
      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1301
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":45,"skipped":276,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 28 19:44:53.585: INFO: >>> kubeConfig: /root/.kube/config
... skipping 2 lines ...
[It] should support existing directory
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
Sep 28 19:44:53.774: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Sep 28 19:44:53.813: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-kxlh
STEP: Creating a pod to test subpath
Sep 28 19:44:53.854: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-kxlh" in namespace "provisioning-6366" to be "Succeeded or Failed"
Sep 28 19:44:53.892: INFO: Pod "pod-subpath-test-inlinevolume-kxlh": Phase="Pending", Reason="", readiness=false. Elapsed: 37.920866ms
Sep 28 19:44:55.930: INFO: Pod "pod-subpath-test-inlinevolume-kxlh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076382501s
Sep 28 19:44:57.969: INFO: Pod "pod-subpath-test-inlinevolume-kxlh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.115425124s
STEP: Saw pod success
Sep 28 19:44:57.969: INFO: Pod "pod-subpath-test-inlinevolume-kxlh" satisfied condition "Succeeded or Failed"
Sep 28 19:44:58.019: INFO: Trying to get logs from node ip-172-20-36-158.ec2.internal pod pod-subpath-test-inlinevolume-kxlh container test-container-volume-inlinevolume-kxlh: <nil>
STEP: delete the pod
Sep 28 19:44:58.112: INFO: Waiting for pod pod-subpath-test-inlinevolume-kxlh to disappear
Sep 28 19:44:58.152: INFO: Pod pod-subpath-test-inlinevolume-kxlh no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-kxlh
Sep 28 19:44:58.152: INFO: Deleting pod "pod-subpath-test-inlinevolume-kxlh" in namespace "provisioning-6366"
... skipping 3 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 28 19:44:58.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-6366" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support existing directory","total":-1,"completed":46,"skipped":276,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:44:58.321: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 320 lines ...
  ----    ------     ----  ----               -------
  Normal  Scheduled  34s   default-scheduler  Successfully assigned pod-network-test-4726/netserver-3 to ip-172-20-62-211.ec2.internal
  Normal  Pulled     34s   kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
  Normal  Created    33s   kubelet            Created container webserver
  Normal  Started    33s   kubelet            Started container webserver

Sep 28 19:33:27.016: INFO: encountered error during dial (did not find expected responses... 
Tries 1
Command curl -g -q -s 'http://100.96.1.195:9080/dial?request=hostname&protocol=udp&host=100.96.4.183&port=8081&tries=1'
retrieved map[]
expected map[netserver-1:{}])
Sep 28 19:33:27.016: INFO: ...failed...will try again in next pass
Sep 28 19:33:27.016: INFO: Breadth first check of 100.96.3.172 on host 172.20.61.119...
Sep 28 19:33:27.054: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://100.96.1.195:9080/dial?request=hostname&protocol=udp&host=100.96.3.172&port=8081&tries=1'] Namespace:pod-network-test-4726 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Sep 28 19:33:27.054: INFO: >>> kubeConfig: /root/.kube/config
Sep 28 19:33:32.436: INFO: Waiting for responses: map[netserver-2:{}]
Sep 28 19:33:34.438: INFO: 
Output of kubectl describe pod pod-network-test-4726/netserver-0:
... skipping 240 lines ...
  ----    ------     ----  ----               -------
  Normal  Scheduled  42s   default-scheduler  Successfully assigned pod-network-test-4726/netserver-3 to ip-172-20-62-211.ec2.internal
  Normal  Pulled     42s   kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
  Normal  Created    41s   kubelet            Created container webserver
  Normal  Started    41s   kubelet            Started container webserver

Sep 28 19:33:35.557: INFO: encountered error during dial (did not find expected responses... 
Tries 1
Command curl -g -q -s 'http://100.96.1.195:9080/dial?request=hostname&protocol=udp&host=100.96.3.172&port=8081&tries=1'
retrieved map[]
expected map[netserver-2:{}])
Sep 28 19:33:35.557: INFO: ...failed...will try again in next pass
Sep 28 19:33:35.557: INFO: Breadth first check of 100.96.2.174 on host 172.20.62.211...
Sep 28 19:33:35.595: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://100.96.1.195:9080/dial?request=hostname&protocol=udp&host=100.96.2.174&port=8081&tries=1'] Namespace:pod-network-test-4726 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Sep 28 19:33:35.595: INFO: >>> kubeConfig: /root/.kube/config
Sep 28 19:33:35.905: INFO: Waiting for responses: map[]
Sep 28 19:33:35.906: INFO: reached 100.96.2.174 after 0/1 tries
Sep 28 19:33:35.906: INFO: Going to retry 2 out of 4 pods....
... skipping 382 lines ...
  ----    ------     ----   ----               -------
  Normal  Scheduled  6m23s  default-scheduler  Successfully assigned pod-network-test-4726/netserver-3 to ip-172-20-62-211.ec2.internal
  Normal  Pulled     6m23s  kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
  Normal  Created    6m22s  kubelet            Created container webserver
  Normal  Started    6m22s  kubelet            Started container webserver

Sep 28 19:39:16.151: INFO: encountered error during dial (did not find expected responses... 
Tries 46
Command curl -g -q -s 'http://100.96.1.195:9080/dial?request=hostname&protocol=udp&host=100.96.4.183&port=8081&tries=1'
retrieved map[]
expected map[netserver-1:{}])
Sep 28 19:39:16.151: INFO: ... Done probing pod [[[ 100.96.4.183 ]]]
Sep 28 19:39:16.151: INFO: succeeded at polling 3 out of 4 connections
... skipping 382 lines ...
  ----    ------     ----  ----               -------
  Normal  Scheduled  12m   default-scheduler  Successfully assigned pod-network-test-4726/netserver-3 to ip-172-20-62-211.ec2.internal
  Normal  Pulled     12m   kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
  Normal  Created    12m   kubelet            Created container webserver
  Normal  Started    12m   kubelet            Started container webserver

Sep 28 19:44:56.282: INFO: encountered error during dial (did not find expected responses... 
Tries 46
Command curl -g -q -s 'http://100.96.1.195:9080/dial?request=hostname&protocol=udp&host=100.96.3.172&port=8081&tries=1'
retrieved map[]
expected map[netserver-2:{}])
Sep 28 19:44:56.282: INFO: ... Done probing pod [[[ 100.96.3.172 ]]]
Sep 28 19:44:56.282: INFO: succeeded at polling 2 out of 4 connections
Sep 28 19:44:56.282: INFO: pod polling failure summary:
Sep 28 19:44:56.282: INFO: Collected error: did not find expected responses... 
Tries 46
Command curl -g -q -s 'http://100.96.1.195:9080/dial?request=hostname&protocol=udp&host=100.96.4.183&port=8081&tries=1'
retrieved map[]
expected map[netserver-1:{}]
Sep 28 19:44:56.282: INFO: Collected error: did not find expected responses... 
Tries 46
Command curl -g -q -s 'http://100.96.1.195:9080/dial?request=hostname&protocol=udp&host=100.96.3.172&port=8081&tries=1'
retrieved map[]
expected map[netserver-2:{}]
Sep 28 19:44:56.283: FAIL: failed,  2 out of 4 connections failed

Full Stack Trace
k8s.io/kubernetes/test/e2e/common/network.glob..func1.1.3()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:93 +0x69
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001146d80)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
... skipping 260 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23
  Granular Checks: Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30
    should function for intra-pod communication: udp [NodeConformance] [Conformance] [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

    Sep 28 19:44:56.283: failed,  2 out of 4 connections failed

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:93
------------------------------
{"msg":"FAILED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":150,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]"]}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:45:00.022: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 25 lines ...
[It] should support readOnly file specified in the volumeMount [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:384
Sep 28 19:44:55.759: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Sep 28 19:44:55.797: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-9l99
STEP: Creating a pod to test subpath
Sep 28 19:44:55.837: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-9l99" in namespace "provisioning-2638" to be "Succeeded or Failed"
Sep 28 19:44:55.875: INFO: Pod "pod-subpath-test-inlinevolume-9l99": Phase="Pending", Reason="", readiness=false. Elapsed: 37.08588ms
Sep 28 19:44:57.917: INFO: Pod "pod-subpath-test-inlinevolume-9l99": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07902093s
Sep 28 19:44:59.954: INFO: Pod "pod-subpath-test-inlinevolume-9l99": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.116786637s
STEP: Saw pod success
Sep 28 19:44:59.954: INFO: Pod "pod-subpath-test-inlinevolume-9l99" satisfied condition "Succeeded or Failed"
Sep 28 19:44:59.991: INFO: Trying to get logs from node ip-172-20-62-211.ec2.internal pod pod-subpath-test-inlinevolume-9l99 container test-container-subpath-inlinevolume-9l99: <nil>
STEP: delete the pod
Sep 28 19:45:00.075: INFO: Waiting for pod pod-subpath-test-inlinevolume-9l99 to disappear
Sep 28 19:45:00.112: INFO: Pod pod-subpath-test-inlinevolume-9l99 no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-9l99
Sep 28 19:45:00.112: INFO: Deleting pod "pod-subpath-test-inlinevolume-9l99" in namespace "provisioning-2638"
... skipping 3 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 28 19:45:00.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-2638" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":26,"skipped":164,"failed":3,"failures":["[sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs should create a non-pre-bound PV and PVC: test write access ","[sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns should create 2 PVs and 4 PVCs: test write access"]}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:45:00.275: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 19 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-dd9ca494-fbf2-48bd-8cb7-1220557310d8
STEP: Creating a pod to test consume secrets
Sep 28 19:44:58.309: INFO: Waiting up to 5m0s for pod "pod-secrets-23d5dc7c-a4a0-4f76-859e-c9ba253696af" in namespace "secrets-1665" to be "Succeeded or Failed"
Sep 28 19:44:58.346: INFO: Pod "pod-secrets-23d5dc7c-a4a0-4f76-859e-c9ba253696af": Phase="Pending", Reason="", readiness=false. Elapsed: 37.099135ms
Sep 28 19:45:00.384: INFO: Pod "pod-secrets-23d5dc7c-a4a0-4f76-859e-c9ba253696af": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.074940986s
STEP: Saw pod success
Sep 28 19:45:00.384: INFO: Pod "pod-secrets-23d5dc7c-a4a0-4f76-859e-c9ba253696af" satisfied condition "Succeeded or Failed"
Sep 28 19:45:00.421: INFO: Trying to get logs from node ip-172-20-36-158.ec2.internal pod pod-secrets-23d5dc7c-a4a0-4f76-859e-c9ba253696af container secret-volume-test: <nil>
STEP: delete the pod
Sep 28 19:45:00.506: INFO: Waiting for pod pod-secrets-23d5dc7c-a4a0-4f76-859e-c9ba253696af to disappear
Sep 28 19:45:00.548: INFO: Pod pod-secrets-23d5dc7c-a4a0-4f76-859e-c9ba253696af no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 28 19:45:00.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1665" for this suite.
STEP: Destroying namespace "secret-namespace-2796" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":-1,"completed":42,"skipped":265,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]"]}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
... skipping 15 lines ...
Sep 28 19:44:55.815: INFO: PersistentVolumeClaim pvc-q2tn8 found but phase is Pending instead of Bound.
Sep 28 19:44:57.853: INFO: PersistentVolumeClaim pvc-q2tn8 found and phase=Bound (2.075073747s)
Sep 28 19:44:57.853: INFO: Waiting up to 3m0s for PersistentVolume local-xrr2w to have phase Bound
Sep 28 19:44:57.890: INFO: PersistentVolume local-xrr2w found and phase=Bound (37.33697ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-7n8p
STEP: Creating a pod to test exec-volume-test
Sep 28 19:44:58.005: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-7n8p" in namespace "volume-6721" to be "Succeeded or Failed"
Sep 28 19:44:58.043: INFO: Pod "exec-volume-test-preprovisionedpv-7n8p": Phase="Pending", Reason="", readiness=false. Elapsed: 38.026959ms
Sep 28 19:45:00.083: INFO: Pod "exec-volume-test-preprovisionedpv-7n8p": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.078146293s
STEP: Saw pod success
Sep 28 19:45:00.083: INFO: Pod "exec-volume-test-preprovisionedpv-7n8p" satisfied condition "Succeeded or Failed"
Sep 28 19:45:00.120: INFO: Trying to get logs from node ip-172-20-50-189.ec2.internal pod exec-volume-test-preprovisionedpv-7n8p container exec-container-preprovisionedpv-7n8p: <nil>
STEP: delete the pod
Sep 28 19:45:00.207: INFO: Waiting for pod exec-volume-test-preprovisionedpv-7n8p to disappear
Sep 28 19:45:00.244: INFO: Pod exec-volume-test-preprovisionedpv-7n8p no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-7n8p
Sep 28 19:45:00.244: INFO: Deleting pod "exec-volume-test-preprovisionedpv-7n8p" in namespace "volume-6721"
... skipping 17 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":29,"skipped":251,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]"]}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:45:00.953: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 32 lines ...
Sep 28 19:44:18.161: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass volume-4358jwfhs
STEP: creating a claim
Sep 28 19:44:18.197: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod exec-volume-test-dynamicpv-77jg
STEP: Creating a pod to test exec-volume-test
Sep 28 19:44:18.357: INFO: Waiting up to 5m0s for pod "exec-volume-test-dynamicpv-77jg" in namespace "volume-4358" to be "Succeeded or Failed"
Sep 28 19:44:18.393: INFO: Pod "exec-volume-test-dynamicpv-77jg": Phase="Pending", Reason="", readiness=false. Elapsed: 35.635072ms
Sep 28 19:44:20.429: INFO: Pod "exec-volume-test-dynamicpv-77jg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071551215s
Sep 28 19:44:22.466: INFO: Pod "exec-volume-test-dynamicpv-77jg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.107975524s
Sep 28 19:44:24.501: INFO: Pod "exec-volume-test-dynamicpv-77jg": Phase="Pending", Reason="", readiness=false. Elapsed: 6.143912682s
Sep 28 19:44:26.545: INFO: Pod "exec-volume-test-dynamicpv-77jg": Phase="Pending", Reason="", readiness=false. Elapsed: 8.187652849s
Sep 28 19:44:28.581: INFO: Pod "exec-volume-test-dynamicpv-77jg": Phase="Pending", Reason="", readiness=false. Elapsed: 10.22320441s
Sep 28 19:44:30.616: INFO: Pod "exec-volume-test-dynamicpv-77jg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.258910023s
STEP: Saw pod success
Sep 28 19:44:30.617: INFO: Pod "exec-volume-test-dynamicpv-77jg" satisfied condition "Succeeded or Failed"
Sep 28 19:44:30.652: INFO: Trying to get logs from node ip-172-20-61-119.ec2.internal pod exec-volume-test-dynamicpv-77jg container exec-container-dynamicpv-77jg: <nil>
STEP: delete the pod
Sep 28 19:44:30.726: INFO: Waiting for pod exec-volume-test-dynamicpv-77jg to disappear
Sep 28 19:44:30.761: INFO: Pod exec-volume-test-dynamicpv-77jg no longer exists
STEP: Deleting pod exec-volume-test-dynamicpv-77jg
Sep 28 19:44:30.761: INFO: Deleting pod "exec-volume-test-dynamicpv-77jg" in namespace "volume-4358"
... skipping 21 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":30,"skipped":240,"failed":2,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:45:01.286: INFO: Only supported for providers [gce gke] (not aws)
... skipping 36 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 28 19:45:01.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6175" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":27,"skipped":165,"failed":3,"failures":["[sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs should create a non-pre-bound PV and PVC: test write access ","[sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns should create 2 PVs and 4 PVCs: test write access"]}

SSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 28 19:45:00.039: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0644 on node default medium
Sep 28 19:45:00.271: INFO: Waiting up to 5m0s for pod "pod-72241860-ca83-41c9-ba21-74fde740bbc8" in namespace "emptydir-8207" to be "Succeeded or Failed"
Sep 28 19:45:00.309: INFO: Pod "pod-72241860-ca83-41c9-ba21-74fde740bbc8": Phase="Pending", Reason="", readiness=false. Elapsed: 37.746663ms
Sep 28 19:45:02.358: INFO: Pod "pod-72241860-ca83-41c9-ba21-74fde740bbc8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.086521779s
STEP: Saw pod success
Sep 28 19:45:02.358: INFO: Pod "pod-72241860-ca83-41c9-ba21-74fde740bbc8" satisfied condition "Succeeded or Failed"
Sep 28 19:45:02.416: INFO: Trying to get logs from node ip-172-20-36-158.ec2.internal pod pod-72241860-ca83-41c9-ba21-74fde740bbc8 container test-container: <nil>
STEP: delete the pod
Sep 28 19:45:02.512: INFO: Waiting for pod pod-72241860-ca83-41c9-ba21-74fde740bbc8 to disappear
Sep 28 19:45:02.550: INFO: Pod pod-72241860-ca83-41c9-ba21-74fde740bbc8 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 28 19:45:02.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8207" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":152,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]"]}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:45:02.639: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 57 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 28 19:45:02.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-8994" for this suite.

•S
------------------------------
{"msg":"PASSED [sig-network] Networking should provide unchanging, static URL paths for kubernetes api services","total":-1,"completed":31,"skipped":245,"failed":2,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:45:02.669: INFO: Only supported for providers [gce gke] (not aws)
... skipping 94 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286

      We don't set fsGroup on block device, skipped.

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:263
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies","total":-1,"completed":24,"skipped":199,"failed":2,"failures":["[sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","[sig-network] Services should implement service.kubernetes.io/service-proxy-name"]}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 28 19:43:57.976: INFO: >>> kubeConfig: /root/.kube/config
... skipping 116 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:449
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":25,"skipped":199,"failed":2,"failures":["[sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","[sig-network] Services should implement service.kubernetes.io/service-proxy-name"]}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:45:04.288: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 201 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (block volmode)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volumes should store data","total":-1,"completed":40,"skipped":219,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:45:04.576: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 251 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 28 19:45:06.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-8207" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource ","total":-1,"completed":41,"skipped":223,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
... skipping 55 lines ...
Sep 28 19:44:17.423: INFO: PersistentVolumeClaim csi-hostpathr89th found but phase is Pending instead of Bound.
Sep 28 19:44:19.459: INFO: PersistentVolumeClaim csi-hostpathr89th found but phase is Pending instead of Bound.
Sep 28 19:44:21.496: INFO: PersistentVolumeClaim csi-hostpathr89th found but phase is Pending instead of Bound.
Sep 28 19:44:23.533: INFO: PersistentVolumeClaim csi-hostpathr89th found and phase=Bound (8.180477528s)
STEP: Creating pod pod-subpath-test-dynamicpv-4src
STEP: Creating a pod to test subpath
Sep 28 19:44:23.641: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-4src" in namespace "provisioning-8941" to be "Succeeded or Failed"
Sep 28 19:44:23.680: INFO: Pod "pod-subpath-test-dynamicpv-4src": Phase="Pending", Reason="", readiness=false. Elapsed: 38.249322ms
Sep 28 19:44:25.715: INFO: Pod "pod-subpath-test-dynamicpv-4src": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073984189s
Sep 28 19:44:27.752: INFO: Pod "pod-subpath-test-dynamicpv-4src": Phase="Pending", Reason="", readiness=false. Elapsed: 4.110548332s
Sep 28 19:44:29.790: INFO: Pod "pod-subpath-test-dynamicpv-4src": Phase="Pending", Reason="", readiness=false. Elapsed: 6.148081215s
Sep 28 19:44:31.826: INFO: Pod "pod-subpath-test-dynamicpv-4src": Phase="Pending", Reason="", readiness=false. Elapsed: 8.184432716s
Sep 28 19:44:33.862: INFO: Pod "pod-subpath-test-dynamicpv-4src": Phase="Pending", Reason="", readiness=false. Elapsed: 10.221021709s
... skipping 4 lines ...
Sep 28 19:44:44.045: INFO: Pod "pod-subpath-test-dynamicpv-4src": Phase="Pending", Reason="", readiness=false. Elapsed: 20.40362478s
Sep 28 19:44:46.081: INFO: Pod "pod-subpath-test-dynamicpv-4src": Phase="Pending", Reason="", readiness=false. Elapsed: 22.439503301s
Sep 28 19:44:48.117: INFO: Pod "pod-subpath-test-dynamicpv-4src": Phase="Pending", Reason="", readiness=false. Elapsed: 24.475452278s
Sep 28 19:44:50.153: INFO: Pod "pod-subpath-test-dynamicpv-4src": Phase="Pending", Reason="", readiness=false. Elapsed: 26.511295566s
Sep 28 19:44:52.190: INFO: Pod "pod-subpath-test-dynamicpv-4src": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.548047488s
STEP: Saw pod success
Sep 28 19:44:52.190: INFO: Pod "pod-subpath-test-dynamicpv-4src" satisfied condition "Succeeded or Failed"
Sep 28 19:44:52.225: INFO: Trying to get logs from node ip-172-20-50-189.ec2.internal pod pod-subpath-test-dynamicpv-4src container test-container-subpath-dynamicpv-4src: <nil>
STEP: delete the pod
Sep 28 19:44:52.305: INFO: Waiting for pod pod-subpath-test-dynamicpv-4src to disappear
Sep 28 19:44:52.340: INFO: Pod pod-subpath-test-dynamicpv-4src no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-4src
Sep 28 19:44:52.340: INFO: Deleting pod "pod-subpath-test-dynamicpv-4src" in namespace "provisioning-8941"
... skipping 54 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:384
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":27,"skipped":202,"failed":1,"failures":["[sig-network] Services should implement service.kubernetes.io/headless"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:45:11.963: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 138 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:37
[It] should support subPath [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:93
STEP: Creating a pod to test hostPath subPath
Sep 28 19:45:06.412: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-1370" to be "Succeeded or Failed"
Sep 28 19:45:06.451: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 38.539154ms
Sep 28 19:45:08.497: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.084933103s
Sep 28 19:45:10.536: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.123683568s
Sep 28 19:45:12.575: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.162908531s
Sep 28 19:45:14.617: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.204279749s
STEP: Saw pod success
Sep 28 19:45:14.617: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
Sep 28 19:45:14.657: INFO: Trying to get logs from node ip-172-20-62-211.ec2.internal pod pod-host-path-test container test-container-2: <nil>
STEP: delete the pod
Sep 28 19:45:14.747: INFO: Waiting for pod pod-host-path-test to disappear
Sep 28 19:45:14.784: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:8.692 seconds]
[sig-storage] HostPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support subPath [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:93
------------------------------
{"msg":"PASSED [sig-storage] HostPath should support subPath [NodeConformance]","total":-1,"completed":20,"skipped":160,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]"]}

SSS
------------------------------
[BeforeEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 28 19:45:14.886: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a volume subpath [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test substitution in volume subpath
Sep 28 19:45:15.115: INFO: Waiting up to 5m0s for pod "var-expansion-a4770112-1882-4f2f-8b7e-5b16b25fd768" in namespace "var-expansion-3957" to be "Succeeded or Failed"
Sep 28 19:45:15.152: INFO: Pod "var-expansion-a4770112-1882-4f2f-8b7e-5b16b25fd768": Phase="Pending", Reason="", readiness=false. Elapsed: 37.490385ms
Sep 28 19:45:17.191: INFO: Pod "var-expansion-a4770112-1882-4f2f-8b7e-5b16b25fd768": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076082403s
Sep 28 19:45:19.229: INFO: Pod "var-expansion-a4770112-1882-4f2f-8b7e-5b16b25fd768": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.114388394s
STEP: Saw pod success
Sep 28 19:45:19.229: INFO: Pod "var-expansion-a4770112-1882-4f2f-8b7e-5b16b25fd768" satisfied condition "Succeeded or Failed"
Sep 28 19:45:19.267: INFO: Trying to get logs from node ip-172-20-36-158.ec2.internal pod var-expansion-a4770112-1882-4f2f-8b7e-5b16b25fd768 container dapi-container: <nil>
STEP: delete the pod
Sep 28 19:45:19.350: INFO: Waiting for pod var-expansion-a4770112-1882-4f2f-8b7e-5b16b25fd768 to disappear
Sep 28 19:45:19.388: INFO: Pod var-expansion-a4770112-1882-4f2f-8b7e-5b16b25fd768 no longer exists
[AfterEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 10 lines ...
Sep 28 19:45:00.975: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support existing single file [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
Sep 28 19:45:01.163: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Sep 28 19:45:01.240: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-91" in namespace "provisioning-91" to be "Succeeded or Failed"
Sep 28 19:45:01.277: INFO: Pod "hostpath-symlink-prep-provisioning-91": Phase="Pending", Reason="", readiness=false. Elapsed: 37.350161ms
Sep 28 19:45:03.315: INFO: Pod "hostpath-symlink-prep-provisioning-91": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075241146s
Sep 28 19:45:05.353: INFO: Pod "hostpath-symlink-prep-provisioning-91": Phase="Pending", Reason="", readiness=false. Elapsed: 4.11274973s
Sep 28 19:45:07.391: INFO: Pod "hostpath-symlink-prep-provisioning-91": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.150908123s
STEP: Saw pod success
Sep 28 19:45:07.391: INFO: Pod "hostpath-symlink-prep-provisioning-91" satisfied condition "Succeeded or Failed"
Sep 28 19:45:07.391: INFO: Deleting pod "hostpath-symlink-prep-provisioning-91" in namespace "provisioning-91"
Sep 28 19:45:07.432: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-91" to be fully deleted
Sep 28 19:45:07.469: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-bh76
STEP: Creating a pod to test subpath
Sep 28 19:45:07.508: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-bh76" in namespace "provisioning-91" to be "Succeeded or Failed"
Sep 28 19:45:07.547: INFO: Pod "pod-subpath-test-inlinevolume-bh76": Phase="Pending", Reason="", readiness=false. Elapsed: 38.909418ms
Sep 28 19:45:09.584: INFO: Pod "pod-subpath-test-inlinevolume-bh76": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076728271s
Sep 28 19:45:11.622: INFO: Pod "pod-subpath-test-inlinevolume-bh76": Phase="Pending", Reason="", readiness=false. Elapsed: 4.114305002s
Sep 28 19:45:13.660: INFO: Pod "pod-subpath-test-inlinevolume-bh76": Phase="Pending", Reason="", readiness=false. Elapsed: 6.152042812s
Sep 28 19:45:15.698: INFO: Pod "pod-subpath-test-inlinevolume-bh76": Phase="Pending", Reason="", readiness=false. Elapsed: 8.190102634s
Sep 28 19:45:17.736: INFO: Pod "pod-subpath-test-inlinevolume-bh76": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.228691739s
STEP: Saw pod success
Sep 28 19:45:17.736: INFO: Pod "pod-subpath-test-inlinevolume-bh76" satisfied condition "Succeeded or Failed"
Sep 28 19:45:17.774: INFO: Trying to get logs from node ip-172-20-50-189.ec2.internal pod pod-subpath-test-inlinevolume-bh76 container test-container-subpath-inlinevolume-bh76: <nil>
STEP: delete the pod
Sep 28 19:45:17.866: INFO: Waiting for pod pod-subpath-test-inlinevolume-bh76 to disappear
Sep 28 19:45:17.903: INFO: Pod pod-subpath-test-inlinevolume-bh76 no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-bh76
Sep 28 19:45:17.903: INFO: Deleting pod "pod-subpath-test-inlinevolume-bh76" in namespace "provisioning-91"
STEP: Deleting pod
Sep 28 19:45:17.940: INFO: Deleting pod "pod-subpath-test-inlinevolume-bh76" in namespace "provisioning-91"
Sep 28 19:45:18.015: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-91" in namespace "provisioning-91" to be "Succeeded or Failed"
Sep 28 19:45:18.053: INFO: Pod "hostpath-symlink-prep-provisioning-91": Phase="Pending", Reason="", readiness=false. Elapsed: 37.058314ms
Sep 28 19:45:20.091: INFO: Pod "hostpath-symlink-prep-provisioning-91": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075255037s
Sep 28 19:45:22.128: INFO: Pod "hostpath-symlink-prep-provisioning-91": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.112999552s
STEP: Saw pod success
Sep 28 19:45:22.129: INFO: Pod "hostpath-symlink-prep-provisioning-91" satisfied condition "Succeeded or Failed"
Sep 28 19:45:22.129: INFO: Deleting pod "hostpath-symlink-prep-provisioning-91" in namespace "provisioning-91"
Sep 28 19:45:22.171: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-91" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 28 19:45:22.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-91" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":30,"skipped":260,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:45:22.299: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 56 lines ...
      Driver supports dynamic provisioning, skipping InlineVolume pattern

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:233
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a volume subpath [Conformance]","total":-1,"completed":21,"skipped":163,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]"]}
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 28 19:45:19.476: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 7 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 28 19:45:22.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-7670" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":-1,"completed":22,"skipped":163,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:45:22.779: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 120 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (block volmode)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data","total":-1,"completed":29,"skipped":190,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}

SSSSSSSSS
------------------------------
[BeforeEach] [sig-scheduling] Multi-AZ Clusters
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 72 lines ...
Sep 28 19:45:01.992: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-87jlnm5
STEP: creating a claim
Sep 28 19:45:02.031: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-hmjr
STEP: Creating a pod to test subpath
Sep 28 19:45:02.147: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-hmjr" in namespace "provisioning-87" to be "Succeeded or Failed"
Sep 28 19:45:02.189: INFO: Pod "pod-subpath-test-dynamicpv-hmjr": Phase="Pending", Reason="", readiness=false. Elapsed: 42.223722ms
Sep 28 19:45:04.232: INFO: Pod "pod-subpath-test-dynamicpv-hmjr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.084555953s
Sep 28 19:45:06.270: INFO: Pod "pod-subpath-test-dynamicpv-hmjr": Phase="Pending", Reason="", readiness=false. Elapsed: 4.123401182s
Sep 28 19:45:08.315: INFO: Pod "pod-subpath-test-dynamicpv-hmjr": Phase="Pending", Reason="", readiness=false. Elapsed: 6.167526641s
Sep 28 19:45:10.352: INFO: Pod "pod-subpath-test-dynamicpv-hmjr": Phase="Pending", Reason="", readiness=false. Elapsed: 8.204984641s
Sep 28 19:45:12.390: INFO: Pod "pod-subpath-test-dynamicpv-hmjr": Phase="Pending", Reason="", readiness=false. Elapsed: 10.242912188s
Sep 28 19:45:14.427: INFO: Pod "pod-subpath-test-dynamicpv-hmjr": Phase="Pending", Reason="", readiness=false. Elapsed: 12.280334559s
Sep 28 19:45:16.465: INFO: Pod "pod-subpath-test-dynamicpv-hmjr": Phase="Pending", Reason="", readiness=false. Elapsed: 14.317918629s
Sep 28 19:45:18.503: INFO: Pod "pod-subpath-test-dynamicpv-hmjr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.356342232s
STEP: Saw pod success
Sep 28 19:45:18.504: INFO: Pod "pod-subpath-test-dynamicpv-hmjr" satisfied condition "Succeeded or Failed"
Sep 28 19:45:18.541: INFO: Trying to get logs from node ip-172-20-36-158.ec2.internal pod pod-subpath-test-dynamicpv-hmjr container test-container-subpath-dynamicpv-hmjr: <nil>
STEP: delete the pod
Sep 28 19:45:18.624: INFO: Waiting for pod pod-subpath-test-dynamicpv-hmjr to disappear
Sep 28 19:45:18.661: INFO: Pod pod-subpath-test-dynamicpv-hmjr no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-hmjr
Sep 28 19:45:18.661: INFO: Deleting pod "pod-subpath-test-dynamicpv-hmjr" in namespace "provisioning-87"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:369
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":28,"skipped":169,"failed":3,"failures":["[sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs should create a non-pre-bound PV and PVC: test write access ","[sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns should create 2 PVs and 4 PVCs: test write access"]}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:45:29.122: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 51 lines ...
[It] should support file as subpath [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
Sep 28 19:45:04.495: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Sep 28 19:45:04.495: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-b6j4
STEP: Creating a pod to test atomic-volume-subpath
Sep 28 19:45:04.533: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-b6j4" in namespace "provisioning-9930" to be "Succeeded or Failed"
Sep 28 19:45:04.567: INFO: Pod "pod-subpath-test-inlinevolume-b6j4": Phase="Pending", Reason="", readiness=false. Elapsed: 34.761979ms
Sep 28 19:45:06.605: INFO: Pod "pod-subpath-test-inlinevolume-b6j4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072925833s
Sep 28 19:45:08.641: INFO: Pod "pod-subpath-test-inlinevolume-b6j4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.108589885s
Sep 28 19:45:10.677: INFO: Pod "pod-subpath-test-inlinevolume-b6j4": Phase="Running", Reason="", readiness=true. Elapsed: 6.144202911s
Sep 28 19:45:12.713: INFO: Pod "pod-subpath-test-inlinevolume-b6j4": Phase="Running", Reason="", readiness=true. Elapsed: 8.180908034s
Sep 28 19:45:14.751: INFO: Pod "pod-subpath-test-inlinevolume-b6j4": Phase="Running", Reason="", readiness=true. Elapsed: 10.218461228s
... skipping 2 lines ...
Sep 28 19:45:20.861: INFO: Pod "pod-subpath-test-inlinevolume-b6j4": Phase="Running", Reason="", readiness=true. Elapsed: 16.328041173s
Sep 28 19:45:22.897: INFO: Pod "pod-subpath-test-inlinevolume-b6j4": Phase="Running", Reason="", readiness=true. Elapsed: 18.364247981s
Sep 28 19:45:24.933: INFO: Pod "pod-subpath-test-inlinevolume-b6j4": Phase="Running", Reason="", readiness=true. Elapsed: 20.400749382s
Sep 28 19:45:26.970: INFO: Pod "pod-subpath-test-inlinevolume-b6j4": Phase="Running", Reason="", readiness=true. Elapsed: 22.437196538s
Sep 28 19:45:29.006: INFO: Pod "pod-subpath-test-inlinevolume-b6j4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.472986496s
STEP: Saw pod success
Sep 28 19:45:29.006: INFO: Pod "pod-subpath-test-inlinevolume-b6j4" satisfied condition "Succeeded or Failed"
Sep 28 19:45:29.042: INFO: Trying to get logs from node ip-172-20-61-119.ec2.internal pod pod-subpath-test-inlinevolume-b6j4 container test-container-subpath-inlinevolume-b6j4: <nil>
STEP: delete the pod
Sep 28 19:45:29.146: INFO: Waiting for pod pod-subpath-test-inlinevolume-b6j4 to disappear
Sep 28 19:45:29.181: INFO: Pod pod-subpath-test-inlinevolume-b6j4 no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-b6j4
Sep 28 19:45:29.181: INFO: Deleting pod "pod-subpath-test-inlinevolume-b6j4" in namespace "provisioning-9930"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec","total":-1,"completed":47,"skipped":332,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]}
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 28 19:45:06.220: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 40 lines ...
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:289

    Requires at least 2 nodes (not 0)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":26,"skipped":207,"failed":2,"failures":["[sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","[sig-network] Services should implement service.kubernetes.io/service-proxy-name"]}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:45:29.348: INFO: Only supported for providers [gce gke] (not aws)
... skipping 201 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 28 19:45:29.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "clientset-5540" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Generated clientset should create v1beta1 cronJobs, delete cronJobs, watch cronJobs","total":-1,"completed":48,"skipped":346,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:45:29.921: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 56 lines ...
Sep 28 19:43:33.542: INFO: Running '/tmp/kubectl2271960906/kubectl --server=https://api.e2e-b08e534318-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1864 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://100.67.174.44:80 2>&1 || true; echo; done'
Sep 28 19:45:27.310: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - 
http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O 
- http://100.67.174.44:80\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - 
http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ 
wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.67.174.44:80\n+ echo\n"
Sep 28 19:45:27.310: INFO: stdout: "wget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nup-down-1-gxzrq\nup-down-1-gxzrq\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nup-down-1-gxzrq\nwget: download timed out\n\nup-down-1-gxzrq\nwget: download timed out\n\nwget: download timed out\n\nup-down-1-gxzrq\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nup-down-1-gxzrq\nup-down-1-gxzrq\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nup-down-1-gxzrq\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nup-down-1-gxzrq\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nup-down-1-gxzrq\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nup-down-1-gxzrq\nwget: download timed out\n\nup-down-1-gxzrq\nup-down-1-gxzrq\nup-down-1-gxzrq\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nup-down-1-gxzrq\nup-down-1-gxzrq\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nup-down-1-gxzrq\nwget: download timed 
out\n\nwget: download timed out\n\nwget: download timed out\n\nup-down-1-gxzrq\nwget: download timed out\n\nwget: download timed out\n\nup-down-1-gxzrq\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nup-down-1-gxzrq\nup-down-1-gxzrq\nup-down-1-gxzrq\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nup-down-1-gxzrq\nup-down-1-gxzrq\nwget: download timed out\n\nwget: download timed out\n\nup-down-1-gxzrq\nup-down-1-gxzrq\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nup-down-1-gxzrq\nup-down-1-gxzrq\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nup-down-1-gxzrq\nup-down-1-gxzrq\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nup-down-1-gxzrq\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nup-down-1-gxzrq\nwget: download timed out\n\nup-down-1-gxzrq\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nup-down-1-gxzrq\nup-down-1-gxzrq\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nup-down-1-gxzrq\nwget: download timed out\n\nwget: download timed out\n\nup-down-1-gxzrq\n"
Sep 28 19:45:27.310: INFO: Unable to reach the following endpoints of service 100.67.174.44: map[up-down-1-kj4kk:{} up-down-1-n8pfs:{}]
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-1864
STEP: Deleting pod verify-service-up-exec-pod-kjfxh in namespace services-1864
Sep 28 19:45:32.393: FAIL: Unexpected error:
    <*errors.errorString | 0xc002e3e030>: {
        s: "service verification failed for: 100.67.174.44\nexpected [up-down-1-gxzrq up-down-1-kj4kk up-down-1-n8pfs]\nreceived [up-down-1-gxzrq wget: download timed out]",
    }
    service verification failed for: 100.67.174.44
    expected [up-down-1-gxzrq up-down-1-kj4kk up-down-1-n8pfs]
    received [up-down-1-gxzrq wget: download timed out]
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func24.8()
... skipping 297 lines ...
• Failure [350.672 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to up and down services [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1015

  Sep 28 19:45:32.393: Unexpected error:
      <*errors.errorString | 0xc002e3e030>: {
          s: "service verification failed for: 100.67.174.44\nexpected [up-down-1-gxzrq up-down-1-kj4kk up-down-1-n8pfs]\nreceived [up-down-1-gxzrq wget: download timed out]",
      }
      service verification failed for: 100.67.174.44
      expected [up-down-1-gxzrq up-down-1-kj4kk up-down-1-n8pfs]
      received [up-down-1-gxzrq wget: download timed out]
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1031
------------------------------
{"msg":"FAILED [sig-network] Services should be able to up and down services","total":-1,"completed":14,"skipped":96,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Services should be able to up and down services"]}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
... skipping 122 lines ...
Sep 28 19:45:04.152: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-98wzx] to have phase Bound
Sep 28 19:45:04.189: INFO: PersistentVolumeClaim pvc-98wzx found and phase=Bound (36.83325ms)
STEP: Deleting the previously created pod
Sep 28 19:45:14.384: INFO: Deleting pod "pvc-volume-tester-kn2js" in namespace "csi-mock-volumes-7540"
Sep 28 19:45:14.422: INFO: Wait up to 5m0s for pod "pvc-volume-tester-kn2js" to be fully deleted
STEP: Checking CSI driver logs
Sep 28 19:45:22.536: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/40db820e-96f0-4455-b6ad-342cfe9e1864/volumes/kubernetes.io~csi/pvc-f8170395-f15d-4fb9-9cf4-9f5e2b21d169/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-kn2js
Sep 28 19:45:22.536: INFO: Deleting pod "pvc-volume-tester-kn2js" in namespace "csi-mock-volumes-7540"
STEP: Deleting claim pvc-98wzx
Sep 28 19:45:22.647: INFO: Waiting up to 2m0s for PersistentVolume pvc-f8170395-f15d-4fb9-9cf4-9f5e2b21d169 to get deleted
Sep 28 19:45:22.687: INFO: PersistentVolume pvc-f8170395-f15d-4fb9-9cf4-9f5e2b21d169 was removed
STEP: Deleting storageclass csi-mock-volumes-7540-scjbtmf
... skipping 43 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI workload information using mock driver
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:443
    should not be passed when CSIDriver does not exist
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when CSIDriver does not exist","total":-1,"completed":30,"skipped":233,"failed":3,"failures":["[sig-network] Conntrack should drop INVALID conntrack entries","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:45:36.092: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 49 lines ...
Sep 28 19:45:26.071: INFO: PersistentVolumeClaim pvc-b6wts found but phase is Pending instead of Bound.
Sep 28 19:45:28.107: INFO: PersistentVolumeClaim pvc-b6wts found and phase=Bound (14.313493443s)
Sep 28 19:45:28.107: INFO: Waiting up to 3m0s for PersistentVolume local-splt4 to have phase Bound
Sep 28 19:45:28.142: INFO: PersistentVolume local-splt4 found and phase=Bound (35.275704ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-45hl
STEP: Creating a pod to test subpath
Sep 28 19:45:28.253: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-45hl" in namespace "provisioning-8091" to be "Succeeded or Failed"
Sep 28 19:45:28.288: INFO: Pod "pod-subpath-test-preprovisionedpv-45hl": Phase="Pending", Reason="", readiness=false. Elapsed: 35.265312ms
Sep 28 19:45:30.325: INFO: Pod "pod-subpath-test-preprovisionedpv-45hl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071812011s
Sep 28 19:45:32.361: INFO: Pod "pod-subpath-test-preprovisionedpv-45hl": Phase="Pending", Reason="", readiness=false. Elapsed: 4.108147185s
Sep 28 19:45:34.399: INFO: Pod "pod-subpath-test-preprovisionedpv-45hl": Phase="Pending", Reason="", readiness=false. Elapsed: 6.145792894s
Sep 28 19:45:36.435: INFO: Pod "pod-subpath-test-preprovisionedpv-45hl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.182066376s
STEP: Saw pod success
Sep 28 19:45:36.435: INFO: Pod "pod-subpath-test-preprovisionedpv-45hl" satisfied condition "Succeeded or Failed"
Sep 28 19:45:36.470: INFO: Trying to get logs from node ip-172-20-61-119.ec2.internal pod pod-subpath-test-preprovisionedpv-45hl container test-container-subpath-preprovisionedpv-45hl: <nil>
STEP: delete the pod
Sep 28 19:45:36.546: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-45hl to disappear
Sep 28 19:45:36.581: INFO: Pod pod-subpath-test-preprovisionedpv-45hl no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-45hl
Sep 28 19:45:36.581: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-45hl" in namespace "provisioning-8091"
... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:384
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":42,"skipped":224,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]"]}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:45:38.106: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
... skipping 71 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64
[It] should not launch unsafe, but not explicitly enabled sysctls on the node [MinimumKubeletVersion:1.21]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:201
STEP: Creating a pod with a greylisted, but not whitelisted sysctl on the node
STEP: Watching for error events or started pod
STEP: Checking that the pod was rejected
[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 28 19:45:40.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sysctl-4146" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should not launch unsafe, but not explicitly enabled sysctls on the node [MinimumKubeletVersion:1.21]","total":-1,"completed":43,"skipped":233,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]"]}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:45:40.477: INFO: Driver csi-hostpath doesn't support ext3 -- skipping
... skipping 59 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:452
    that expects a client request
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:453
      should support a client that connects, sends DATA, and disconnects
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:457
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects a client request should support a client that connects, sends DATA, and disconnects","total":-1,"completed":49,"skipped":348,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 28 19:45:42.830: INFO: Only supported for providers [openstack] (not aws)
... skipping 84 lines ...
------------------------------
SSSSSS
------------------------------
Sep 28 19:45:42.899: INFO: Running AfterSuite actions on all nodes


{"msg":"PASSED [sig-node] Pods should delete a collection of pods [Conformance]","total":-1,"completed":15,"skipped":105,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Services should be able to up and down services"]}
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 28 19:45:35.825: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to exceed backoffLimit
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:349
STEP: Creating a job
STEP: Ensuring job exceed backofflimit
STEP: Checking that 2 pod created and status is failed
[AfterEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 28 19:45:48.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-1865" for this suite.


• [SLOW TEST:12.358 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should fail to exceed backoffLimit
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:349
------------------------------
{"msg":"PASSED [sig-apps] Job should fail to exceed backoffLimit","total":-1,"completed":16,"skipped":105,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Services should be able to up and down services"]}
Sep 28 19:45:48.189: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 16 lines ...
Sep 28 19:45:25.092: INFO: PersistentVolumeClaim pvc-znkms found but phase is Pending instead of Bound.
Sep 28 19:45:27.129: INFO: PersistentVolumeClaim pvc-znkms found and phase=Bound (2.074517988s)
Sep 28 19:45:27.129: INFO: Waiting up to 3m0s for PersistentVolume local-5nlfh to have phase Bound
Sep 28 19:45:27.167: INFO: PersistentVolume local-5nlfh found and phase=Bound (37.194494ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-28l4
STEP: Creating a pod to test atomic-volume-subpath
Sep 28 19:45:27.280: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-28l4" in namespace "provisioning-217" to be "Succeeded or Failed"
Sep 28 19:45:27.317: INFO: Pod "pod-subpath-test-preprovisionedpv-28l4": Phase="Pending", Reason="", readiness=false. Elapsed: 37.452774ms
Sep 28 19:45:29.357: INFO: Pod "pod-subpath-test-preprovisionedpv-28l4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077454334s
Sep 28 19:45:31.395: INFO: Pod "pod-subpath-test-preprovisionedpv-28l4": Phase="Running", Reason="", readiness=true. Elapsed: 4.115307813s
Sep 28 19:45:33.433: INFO: Pod "pod-subpath-test-preprovisionedpv-28l4": Phase="Running", Reason="", readiness=true. Elapsed: 6.153085425s
Sep 28 19:45:35.473: INFO: Pod "pod-subpath-test-preprovisionedpv-28l4": Phase="Running", Reason="", readiness=true. Elapsed: 8.193132183s
Sep 28 19:45:37.512: INFO: Pod "pod-subpath-test-preprovisionedpv-28l4": Phase="Running", Reason="", readiness=true. Elapsed: 10.231656759s
Sep 28 19:45:39.551: INFO: Pod "pod-subpath-test-preprovisionedpv-28l4": Phase="Running", Reason="", readiness=true. Elapsed: 12.270595804s
Sep 28 19:45:41.589: INFO: Pod "pod-subpath-test-preprovisionedpv-28l4": Phase="Running", Reason="", readiness=true. Elapsed: 14.308999285s
Sep 28 19:45:43.628: INFO: Pod "pod-subpath-test-preprovisionedpv-28l4": Phase="Running", Reason="", readiness=true. Elapsed: 16.347781712s
Sep 28 19:45:45.666: INFO: Pod "pod-subpath-test-preprovisionedpv-28l4": Phase="Running", Reason="", readiness=true. Elapsed: 18.386287124s
Sep 28 19:45:47.704: INFO: Pod "pod-subpath-test-preprovisionedpv-28l4": Phase="Running", Reason="", readiness=true. Elapsed: 20.42430314s
Sep 28 19:45:49.742: INFO: Pod "pod-subpath-test-preprovisionedpv-28l4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.46247231s
STEP: Saw pod success
Sep 28 19:45:49.743: INFO: Pod "pod-subpath-test-preprovisionedpv-28l4" satisfied condition "Succeeded or Failed"
Sep 28 19:45:49.780: INFO: Trying to get logs from node ip-172-20-62-211.ec2.internal pod pod-subpath-test-preprovisionedpv-28l4 container test-container-subpath-preprovisionedpv-28l4: <nil>
STEP: delete the pod
Sep 28 19:45:49.861: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-28l4 to disappear
Sep 28 19:45:49.898: INFO: Pod pod-subpath-test-preprovisionedpv-28l4 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-28l4
Sep 28 19:45:49.899: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-28l4" in namespace "provisioning-217"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":31,"skipped":266,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]"]}
Sep 28 19:45:50.794: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 72 lines ...
Sep 28 19:27:04.774: INFO: Unable to read wheezy_udp@PodARecord from pod dns-665/dns-test-e41e8cf2-c40c-4205-b06f-4cdc2ad9b1f6: the server is currently unable to handle the request (get pods dns-test-e41e8cf2-c40c-4205-b06f-4cdc2ad9b1f6)
Sep 28 19:27:34.816: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-665/dns-test-e41e8cf2-c40c-4205-b06f-4cdc2ad9b1f6: the server is currently unable to handle the request (get pods dns-test-e41e8cf2-c40c-4205-b06f-4cdc2ad9b1f6)
Sep 28 19:28:04.863: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.dns-665.svc.cluster.local from pod dns-665/dns-test-e41e8cf2-c40c-4205-b06f-4cdc2ad9b1f6: the server is currently unable to handle the request (get pods dns-test-e41e8cf2-c40c-4205-b06f-4cdc2ad9b1f6)
Sep 28 19:28:34.904: INFO: Unable to read jessie_hosts@dns-querier-1 from pod dns-665/dns-test-e41e8cf2-c40c-4205-b06f-4cdc2ad9b1f6: the server is currently unable to handle the request (get pods dns-test-e41e8cf2-c40c-4205-b06f-4cdc2ad9b1f6)
Sep 28 19:29:04.950: INFO: Unable to read jessie_udp@PodARecord from pod dns-665/dns-test-e41e8cf2-c40c-4205-b06f-4cdc2ad9b1f6: the server is currently unable to handle the request (get pods dns-test-e41e8cf2-c40c-4205-b06f-4cdc2ad9b1f6)
Sep 28 19:29:34.992: INFO: Unable to read jessie_tcp@PodARecord from pod dns-665/dns-test-e41e8cf2-c40c-4205-b06f-4cdc2ad9b1f6: the server is currently unable to handle the request (get pods dns-test-e41e8cf2-c40c-4205-b06f-4cdc2ad9b1f6)
Sep 28 19:29:34.992: INFO: Lookups using dns-665/dns-test-e41e8cf2-c40c-4205-b06f-4cdc2ad9b1f6 failed for: [wheezy_hosts@dns-querier-1.dns-test-service.dns-665.svc.cluster.local wheezy_hosts@dns-querier-1 wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-1.dns-test-service.dns-665.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Sep 28 19:30:10.031: INFO: Unable to read wheezy_hosts@dns-querier-1.dns-test-service.dns-665.svc.cluster.local from pod dns-665/dns-test-e41e8cf2-c40c-4205-b06f-4cdc2ad9b1f6: the server is currently unable to handle the request (get pods dns-test-e41e8cf2-c40c-4205-b06f-4cdc2ad9b1f6)
Sep 28 19:30:40.071: INFO: Unable to read wheezy_hosts@dns-querier-1 from pod dns-665/dns-test-e41e8cf2-c40c-4205-b06f-4cdc2ad9b1f6: the server is currently unable to handle the request (get pods dns-test-e41e8cf2-c40c-4205-b06f-4cdc2ad9b1f6)
Sep 28 19:31:10.111: INFO: Unable to read wheezy_udp@PodARecord from pod dns-665/dns-test-e41e8cf2-c40c-4205-b06f-4cdc2ad9b1f6: the server is currently unable to handle the request (get pods dns-test-e41e8cf2-c40c-4205-b06f-4cdc2ad9b1f6)
Sep 28 19:31:40.150: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-665/dns-test-e41e8cf2-c40c-4205-b06f-4cdc2ad9b1f6: the server is currently unable to handle the request (get pods dns-test-e41e8cf2-c40c-4205-b06f-4cdc2ad9b1f6)
Sep 28 19:32:10.188: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.dns-665.svc.cluster.local from pod dns-665/dns-test-e41e8cf2-c40c-4205-b06f-4cdc2ad9b1f6: the server is currently unable to handle the request (get pods dns-test-e41e8cf2-c40c-4205-b06f-4cdc2ad9b1f6)
Sep 28 19:32:40.226: INFO: Unable to read jessie_hosts@dns-querier-1 from pod dns-665/dns-test-e41e8cf2-c40c-4205-b06f-4cdc2ad9b1f6: the server is currently unable to handle the request (get pods dns-test-e41e8cf2-c40c-4205-b06f-4cdc2ad9b1f6)
Sep 28 19:33:10.265: INFO: Unable to read jessie_udp@PodARecord from pod dns-665/dns-test-e41e8cf2-c40c-4205-b06f-4cdc2ad9b1f6: the server is currently unable to handle the request (get pods dns-test-e41e8cf2-c40c-4205-b06f-4cdc2ad9b1f6)
Sep 28 19:33:40.303: INFO: Unable to read jessie_tcp@PodARecord from pod dns-665/dns-test-e41e8cf2-c40c-4205-b06f-4cdc2ad9b1f6: the server is currently unable to handle the request (get pods dns-test-e41e8cf2-c40c-4205-b06f-4cdc2ad9b1f6)
Sep 28 19:33:40.303: INFO: Lookups using dns-665/dns-test-e41e8cf2-c40c-4205-b06f-4cdc2ad9b1f6 failed for: [wheezy_hosts@dns-querier-1.dns-test-service.dns-665.svc.cluster.local wheezy_hosts@dns-querier-1 wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-1.dns-test-service.dns-665.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Sep 28 19:34:15.032: INFO: Unable to read wheezy_hosts@dns-querier-1.dns-test-service.dns-665.svc.cluster.local from pod dns-665/dns-test-e41e8cf2-c40c-4205-b06f-4cdc2ad9b1f6: the server is currently unable to handle the request (get pods dns-test-e41e8cf2-c40c-4205-b06f-4cdc2ad9b1f6)
Sep 28 19:34:45.071: INFO: Unable to read wheezy_hosts@dns-querier-1 from pod dns-665/dns-test-e41e8cf2-c40c-4205-b06f-4cdc2ad9b1f6: the server is currently unable to handle the request (get pods dns-test-e41e8cf2-c40c-4205-b06f-4cdc2ad9b1f6)
Sep 28 19:35:15.114: INFO: Unable to read wheezy_udp@PodARecord from pod dns-665/dns-test-e41e8cf2-c40c-4205-b06f-4cdc2ad9b1f6: the server is currently unable to handle the request (get pods dns-test-e41e8cf2-c40c-4205-b06f-4cdc2ad9b1f6)
Sep 28 19:35:45.157: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-665/dns-test-e41e8cf2-c40c-4205-b06f-4cdc2ad9b1f6: the server is currently unable to handle the request (get pods dns-test-e41e8cf2-c40c-4205-b06f-4cdc2ad9b1f6)
Sep 28 19:36:15.202: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.dns-665.svc.cluster.local from pod dns-665/dns-test-e41e8cf2-c40c-4205-b06f-4cdc2ad9b1f6: the server is currently unable to handle the request (get pods dns-test-e41e8cf2-c40c-4205-b06f-4cdc2ad9b1f6)
Sep 28 19:36:45.242: INFO: Unable to read jessie_hosts@dns-querier-1 from pod dns-665/dns-test-e41e8cf2-c40c-4205-b06f-4cdc2ad9b1f6: the server is currently unable to handle the request (get pods dns-test-e41e8cf2-c40c-4205-b06f-4cdc2ad9b1f6)
Sep 28 19:37:15.281: INFO: Unable to read jessie_udp@PodARecord from pod dns-665/dns-test-e41e8cf2-c40c-4205-b06f-4cdc2ad9b1f6: the server is currently unable to handle the request (get pods dns-test-e41e8cf2-c40c-4205-b06f-4cdc2ad9b1f6)
Sep 28 19:37:45.320: INFO: Unable to read jessie_tcp@PodARecord from pod dns-665/dns-test-e41e8cf2-c40c-4205-b06f-4cdc2ad9b1f6: the server is currently unable to handle the request (get pods dns-test-e41e8cf2-c40c-4205-b06f-4cdc2ad9b1f6)
Sep 28 19:37:45.320: INFO: Lookups using dns-665/dns-test-e41e8cf2-c40c-4205-b06f-4cdc2ad9b1f6 failed for: [wheezy_hosts@dns-querier-1.dns-test-service.dns-665.svc.cluster.local wheezy_hosts@dns-querier-1 wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-1.dns-test-service.dns-665.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Sep 28 19:38:20.033: INFO: Unable to read wheezy_hosts@dns-querier-1.dns-test-service.dns-665.svc.cluster.local from pod dns-665/dns-test-e41e8cf2-c40c-4205-b06f-4cdc2ad9b1f6: the server is currently unable to handle the request (get pods dns-test-e41e8cf2-c40c-4205-b06f-4cdc2ad9b1f6)
Sep 28 19:38:50.073: INFO: Unable to read wheezy_hosts@dns-querier-1 from pod dns-665/dns-test-e41e8cf2-c40c-4205-b06f-4cdc2ad9b1f6: the server is currently unable to handle the request (get pods dns-test-e41e8cf2-c40c-4205-b06f-4cdc2ad9b1f6)
Sep 28 19:39:20.112: INFO: Unable to read wheezy_udp@PodARecord from pod dns-665/dns-test-e41e8cf2-c40c-4205-b06f-4cdc2ad9b1f6: the server is currently unable to handle the request (get pods dns-test-e41e8cf2-c40c-4205-b06f-4cdc2ad9b1f6)
Sep 28 19:39:50.150: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-665/dns-test-e41e8cf2-c40c-4205-b06f-4cdc2ad9b1f6: the server is currently unable to handle the request (get pods dns-test-e41e8cf2-c40c-4205-b06f-4cdc2ad9b1f6)
Sep 28 19:40:20.189: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.dns-665.svc.cluster.local from pod dns-665/dns-test-e41e8cf2-c40c-4205-b06f-4cdc2ad9b1f6: the server is currently unable to handle the request (get pods dns-test-e41e8cf2-c40c-4205-b06f-4cdc2ad9b1f6)
Sep 28 19:40:50.245: INFO: Unable to read jessie_hosts@dns-querier-1 from pod dns-665/dns-test-e41e8cf2-c40c-4205-b06f-4cdc2ad9b1f6: the server is currently unable to handle the request (get pods dns-test-e41e8cf2-c40c-4205-b06f-4cdc2ad9b1f6)
Sep 28 19:41:20.283: INFO: Unable to read jessie_udp@PodARecord from pod dns-665/dns-test-e41e8cf2-c40c-4205-b06f-4cdc2ad9b1f6: the server is currently unable to handle the request (get pods dns-test-e41e8cf2-c40c-4205-b06f-4cdc2ad9b1f6)
Sep 28 19:41:50.322: INFO: Unable to read jessie_tcp@PodARecord from pod dns-665/dns-test-e41e8cf2-c40c-4205-b06f-4cdc2ad9b1f6: the server is currently unable to handle the request (get pods dns-test-e41e8cf2-c40c-4205-b06f-4cdc2ad9b1f6)
Sep 28 19:41:50.322: INFO: Lookups using dns-665/dns-test-e41e8cf2-c40c-4205-b06f-4cdc2ad9b1f6 failed for: [wheezy_hosts@dns-querier-1.dns-test-service.dns-665.svc.cluster.local wheezy_hosts@dns-querier-1 wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-1.dns-test-service.dns-665.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Sep 28 19:42:20.361: INFO: Unable to read wheezy_hosts@dns-querier-1.dns-test-service.dns-665.svc.cluster.local from pod dns-665/dns-test-e41e8cf2-c40c-4205-b06f-4cdc2ad9b1f6: the server is currently unable to handle the request (get pods dns-test-e41e8cf2-c40c-4205-b06f-4cdc2ad9b1f6)
Sep 28 19:42:50.400: INFO: Unable to read wheezy_hosts@dns-querier-1 from pod dns-665/dns-test-e41e8cf2-c40c-4205-b06f-4cdc2ad9b1f6: the server is currently unable to handle the request (get pods dns-test-e41e8cf2-c40c-4205-b06f-4cdc2ad9b1f6)
Sep 28 19:43:20.439: INFO: Unable to read wheezy_udp@PodARecord from pod dns-665/dns-test-e41e8cf2-c40c-4205-b06f-4cdc2ad9b1f6: the server is currently unable to handle the request (get pods dns-test-e41e8cf2-c40c-4205-b06f-4cdc2ad9b1f6)
Sep 28 19:43:50.478: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-665/dns-test-e41e8cf2-c40c-4205-b06f-4cdc2ad9b1f6: the server is currently unable to handle the request (get pods dns-test-e41e8cf2-c40c-4205-b06f-4cdc2ad9b1f6)
Sep 28 19:44:20.517: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.dns-665.svc.cluster.local from pod dns-665/dns-test-e41e8cf2-c40c-4205-b06f-4cdc2ad9b1f6: the server is currently unable to handle the request (get pods dns-test-e41e8cf2-c40c-4205-b06f-4cdc2ad9b1f6)
Sep 28 19:44:50.556: INFO: Unable to read jessie_hosts@dns-querier-1 from pod dns-665/dns-test-e41e8cf2-c40c-4205-b06f-4cdc2ad9b1f6: the server is currently unable to handle the request (get pods dns-test-e41e8cf2-c40c-4205-b06f-4cdc2ad9b1f6)
Sep 28 19:45:20.595: INFO: Unable to read jessie_udp@PodARecord from pod dns-665/dns-test-e41e8cf2-c40c-4205-b06f-4cdc2ad9b1f6: the server is currently unable to handle the request (get pods dns-test-e41e8cf2-c40c-4205-b06f-4cdc2ad9b1f6)
Sep 28 19:45:50.634: INFO: Unable to read jessie_tcp@PodARecord from pod dns-665/dns-test-e41e8cf2-c40c-4205-b06f-4cdc2ad9b1f6: the server is currently unable to handle the request (get pods dns-test-e41e8cf2-c40c-4205-b06f-4cdc2ad9b1f6)
Sep 28 19:45:50.634: INFO: Lookups using dns-665/dns-test-e41e8cf2-c40c-4205-b06f-4cdc2ad9b1f6 failed for: [wheezy_hosts@dns-querier-1.dns-test-service.dns-665.svc.cluster.local wheezy_hosts@dns-querier-1 wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-1.dns-test-service.dns-665.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Sep 28 19:45:50.634: FAIL: Unexpected error:
    <*errors.errorString | 0xc000244250>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 251 lines ...
• Failure [1234.866 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Sep 28 19:45:50.634: Unexpected error:
      <*errors.errorString | 0xc000244250>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:463
------------------------------
{"msg":"FAILED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":-1,"completed":6,"skipped":29,"failed":1,"failures":["[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]"]}
Sep 28 19:45:53.185: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
... skipping 163 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volumes should store data","total":-1,"completed":43,"skipped":473,"failed":2,"failures":["[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]}
Sep 28 19:45:54.370: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 25 lines ...
• [SLOW TEST:20.414 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":-1,"completed":31,"skipped":236,"failed":3,"failures":["[sig-network] Conntrack should drop INVALID conntrack entries","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]"]}
Sep 28 19:45:56.524: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
... skipping 9 lines ...
Sep 28 19:44:57.837: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-6377qdhf9
STEP: creating a claim
Sep 28 19:44:57.875: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-b28l
STEP: Creating a pod to test atomic-volume-subpath
Sep 28 19:44:58.005: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-b28l" in namespace "provisioning-6377" to be "Succeeded or Failed"
Sep 28 19:44:58.042: INFO: Pod "pod-subpath-test-dynamicpv-b28l": Phase="Pending", Reason="", readiness=false. Elapsed: 37.325678ms
Sep 28 19:45:00.082: INFO: Pod "pod-subpath-test-dynamicpv-b28l": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07705843s
Sep 28 19:45:02.119: INFO: Pod "pod-subpath-test-dynamicpv-b28l": Phase="Pending", Reason="", readiness=false. Elapsed: 4.114302669s
Sep 28 19:45:04.157: INFO: Pod "pod-subpath-test-dynamicpv-b28l": Phase="Pending", Reason="", readiness=false. Elapsed: 6.151827824s
Sep 28 19:45:06.195: INFO: Pod "pod-subpath-test-dynamicpv-b28l": Phase="Pending", Reason="", readiness=false. Elapsed: 8.190169682s
Sep 28 19:45:08.241: INFO: Pod "pod-subpath-test-dynamicpv-b28l": Phase="Pending", Reason="", readiness=false. Elapsed: 10.236404079s
... skipping 9 lines ...
Sep 28 19:45:28.623: INFO: Pod "pod-subpath-test-dynamicpv-b28l": Phase="Running", Reason="", readiness=true. Elapsed: 30.617731118s
Sep 28 19:45:30.661: INFO: Pod "pod-subpath-test-dynamicpv-b28l": Phase="Running", Reason="", readiness=true. Elapsed: 32.655563918s
Sep 28 19:45:32.698: INFO: Pod "pod-subpath-test-dynamicpv-b28l": Phase="Running", Reason="", readiness=true. Elapsed: 34.692615742s
Sep 28 19:45:34.736: INFO: Pod "pod-subpath-test-dynamicpv-b28l": Phase="Running", Reason="", readiness=true. Elapsed: 36.73073453s
Sep 28 19:45:36.774: INFO: Pod "pod-subpath-test-dynamicpv-b28l": Phase="Succeeded", Reason="", readiness=false. Elapsed: 38.768550152s
STEP: Saw pod success
Sep 28 19:45:36.774: INFO: Pod "pod-subpath-test-dynamicpv-b28l" satisfied condition "Succeeded or Failed"
Sep 28 19:45:36.810: INFO: Trying to get logs from node ip-172-20-36-158.ec2.internal pod pod-subpath-test-dynamicpv-b28l container test-container-subpath-dynamicpv-b28l: <nil>
STEP: delete the pod
Sep 28 19:45:36.893: INFO: Waiting for pod pod-subpath-test-dynamicpv-b28l to disappear
Sep 28 19:45:36.930: INFO: Pod pod-subpath-test-dynamicpv-b28l no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-b28l
Sep 28 19:45:36.930: INFO: Deleting pod "pod-subpath-test-dynamicpv-b28l" in namespace "provisioning-6377"
... skipping 21 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":40,"skipped":240,"failed":3,"failures":["[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns should create 3 PVs and 3 PVCs: test write access","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]"]}
Sep 28 19:45:57.425: INFO: Running AfterSuite actions on all nodes
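The repeated `Waiting up to 5m0s for pod ... to be "Succeeded or Failed"` lines above are a fixed-interval poll: re-check the pod phase every ~2s until the condition holds or the timeout elapses. A minimal generic sketch of that wait loop, assuming nothing about the real e2e framework's implementation:

```python
import time

def wait_for_condition(check, timeout_s=300.0, interval_s=2.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll check() until it returns True or timeout_s elapses.

    Mirrors the log's pattern of re-checking pod phase every ~2s for up
    to 5m0s. Returns the elapsed seconds on success and raises
    TimeoutError otherwise. (Generic sketch, not the actual framework.)
    """
    start = clock()
    while True:
        elapsed = clock() - start
        if check():
            return elapsed
        if elapsed >= timeout_s:
            raise TimeoutError(f"gave up after {timeout_s}s waiting for condition")
        sleep(interval_s)

# Simulated pod that reports Pending twice, then Succeeded.
phases = iter(["Pending", "Pending", "Succeeded"])
current = {"phase": "Pending"}

def pod_succeeded():
    current["phase"] = next(phases, current["phase"])
    return current["phase"] == "Succeeded"

elapsed = wait_for_condition(pod_succeeded, timeout_s=10.0, interval_s=0.01)
print(current["phase"])  # Succeeded
```

The check runs once before the first sleep, which is why the log shows an initial sub-40ms probe before the ~2s cadence settles in.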


[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
... skipping 59 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":29,"skipped":184,"failed":3,"failures":["[sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs should create a non-pre-bound PV and PVC: test write access ","[sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns should create 2 PVs and 4 PVCs: test write access"]}
Sep 28 19:46:02.625: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 16 lines ...
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-3956-crds.webhook.example.com via the AdmissionRegistration API
Sep 28 19:45:17.380: INFO: Waiting for webhook configuration to be ready...
Sep 28 19:45:27.558: INFO: Waiting for webhook configuration to be ready...
Sep 28 19:45:37.659: INFO: Waiting for webhook configuration to be ready...
Sep 28 19:45:47.759: INFO: Waiting for webhook configuration to be ready...
Sep 28 19:45:57.837: INFO: Waiting for webhook configuration to be ready...
Sep 28 19:45:57.837: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.errorString | 0xc000244250>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 407 lines ...
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with different stored version [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Sep 28 19:45:57.837: waiting for webhook configuration to be ready
  Unexpected error:
      <*errors.errorString | 0xc000244250>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1826
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":-1,"completed":46,"skipped":287,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]"]}
Sep 28 19:46:02.679: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 44 lines ...
Sep 28 19:45:04.326: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-2870 to register on node ip-172-20-50-189.ec2.internal
STEP: Creating pod
Sep 28 19:45:14.008: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Sep 28 19:45:14.045: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-k9s6m] to have phase Bound
Sep 28 19:45:14.080: INFO: PersistentVolumeClaim pvc-k9s6m found and phase=Bound (34.786473ms)
STEP: checking for CSIInlineVolumes feature
Sep 28 19:45:22.345: INFO: Error getting logs for pod inline-volume-gr9rk: the server rejected our request for an unknown reason (get pods inline-volume-gr9rk)
Sep 28 19:45:22.417: INFO: Deleting pod "inline-volume-gr9rk" in namespace "csi-mock-volumes-2870"
Sep 28 19:45:22.454: INFO: Wait up to 5m0s for pod "inline-volume-gr9rk" to be fully deleted
STEP: Deleting the previously created pod
Sep 28 19:45:36.528: INFO: Deleting pod "pvc-volume-tester-q2f8b" in namespace "csi-mock-volumes-2870"
Sep 28 19:45:36.564: INFO: Wait up to 5m0s for pod "pvc-volume-tester-q2f8b" to be fully deleted
STEP: Checking CSI driver logs
Sep 28 19:45:44.673: INFO: Found volume attribute csi.storage.k8s.io/serviceAccount.name: default
Sep 28 19:45:44.674: INFO: Found volume attribute csi.storage.k8s.io/ephemeral: false
Sep 28 19:45:44.674: INFO: Found volume attribute csi.storage.k8s.io/pod.name: pvc-volume-tester-q2f8b
Sep 28 19:45:44.674: INFO: Found volume attribute csi.storage.k8s.io/pod.namespace: csi-mock-volumes-2870
Sep 28 19:45:44.674: INFO: Found volume attribute csi.storage.k8s.io/pod.uid: d155cc8d-a4ea-4275-8027-4d9dde63d60e
Sep 28 19:45:44.674: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/d155cc8d-a4ea-4275-8027-4d9dde63d60e/volumes/kubernetes.io~csi/pvc-f623a69d-424e-4b8b-9f13-19725e6e1eec/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-q2f8b
Sep 28 19:45:44.674: INFO: Deleting pod "pvc-volume-tester-q2f8b" in namespace "csi-mock-volumes-2870"
STEP: Deleting claim pvc-k9s6m
Sep 28 19:45:44.783: INFO: Waiting up to 2m0s for PersistentVolume pvc-f623a69d-424e-4b8b-9f13-19725e6e1eec to get deleted
Sep 28 19:45:44.819: INFO: PersistentVolume pvc-f623a69d-424e-4b8b-9f13-19725e6e1eec found and phase=Released (35.650119ms)
Sep 28 19:45:46.854: INFO: PersistentVolume pvc-f623a69d-424e-4b8b-9f13-19725e6e1eec was removed
... skipping 45 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI workload information using mock driver
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:443
    should be passed when podInfoOnMount=true
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should be passed when podInfoOnMount=true","total":-1,"completed":32,"skipped":253,"failed":2,"failures":["[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]}
Sep 28 19:46:06.298: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
... skipping 60 lines ...
Sep 28 19:46:00.224: INFO: Pod aws-client still exists
Sep 28 19:46:02.189: INFO: Waiting for pod aws-client to disappear
Sep 28 19:46:02.235: INFO: Pod aws-client still exists
Sep 28 19:46:04.189: INFO: Waiting for pod aws-client to disappear
Sep 28 19:46:04.225: INFO: Pod aws-client no longer exists
STEP: cleaning the environment after aws
Sep 28 19:46:04.442: INFO: Couldn't delete PD "aws://us-east-1a/vol-0998ecf2e9c20d1f5", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0998ecf2e9c20d1f5 is currently attached to i-0d18796061afbe613
	status code: 400, request id: 788d9f00-ba08-4831-b88e-60c8e656bff5
Sep 28 19:46:09.862: INFO: Successfully deleted PD "aws://us-east-1a/vol-0998ecf2e9c20d1f5".
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 28 19:46:09.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-7538" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] volumes should store data","total":-1,"completed":28,"skipped":222,"failed":1,"failures":["[sig-network] Services should implement service.kubernetes.io/headless"]}
Sep 28 19:46:09.942: INFO: Running AfterSuite actions on all nodes
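The `Couldn't delete PD ..., sleeping 5s: ... VolumeInUse` line above shows a retry loop: EBS refuses to delete a volume still attached to an instance, so the cleanup sleeps and retries until detach completes. A sketch of that retry-with-fixed-backoff pattern; `delete_with_retry` and the RuntimeError stand-in for the AWS `VolumeInUse` error are assumptions for illustration, not kops source:

```python
import time

def delete_with_retry(delete_fn, attempts=6, backoff_s=5.0, sleep=time.sleep):
    """Call delete_fn until it stops raising, sleeping between tries.

    Mirrors the log's 'sleeping 5s' loop for an EBS volume that is still
    attached. Re-raises the last error if all attempts fail.
    (Hypothetical helper, not the actual kops/e2e cleanup code.)
    """
    last_err = None
    for i in range(attempts):
        try:
            return delete_fn()
        except RuntimeError as err:  # stand-in for the AWS VolumeInUse error
            last_err = err
            if i < attempts - 1:
                sleep(backoff_s)
    raise last_err

calls = {"n": 0}

def delete_volume():
    calls["n"] += 1
    if calls["n"] < 3:  # volume still attached for the first two attempts
        raise RuntimeError("VolumeInUse: volume is currently attached")
    return "deleted"

result = delete_with_retry(delete_volume, backoff_s=0.0)
print(result, calls["n"])  # deleted 3
```

A fixed 5s backoff is reasonable here because detach latency is dominated by the hypervisor, not by load; exponential backoff buys little for this case.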


[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 26 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  CustomResourceDefinition Watch
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42
    watch on custom resource definition objects [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":-1,"completed":23,"skipped":174,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]"]}
Sep 28 19:46:25.733: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
... skipping 54 lines ...
Sep 28 19:45:42.242: INFO: PersistentVolumeClaim csi-hostpath44dg8 found but phase is Pending instead of Bound.
Sep 28 19:45:44.279: INFO: PersistentVolumeClaim csi-hostpath44dg8 found but phase is Pending instead of Bound.
Sep 28 19:45:46.314: INFO: PersistentVolumeClaim csi-hostpath44dg8 found but phase is Pending instead of Bound.
Sep 28 19:45:48.350: INFO: PersistentVolumeClaim csi-hostpath44dg8 found and phase=Bound (6.144124504s)
STEP: Creating pod pod-subpath-test-dynamicpv-d6b5
STEP: Creating a pod to test subpath
Sep 28 19:45:48.459: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-d6b5" in namespace "provisioning-6129" to be "Succeeded or Failed"
Sep 28 19:45:48.494: INFO: Pod "pod-subpath-test-dynamicpv-d6b5": Phase="Pending", Reason="", readiness=false. Elapsed: 35.110112ms
Sep 28 19:45:50.530: INFO: Pod "pod-subpath-test-dynamicpv-d6b5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070713091s
Sep 28 19:45:52.565: INFO: Pod "pod-subpath-test-dynamicpv-d6b5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.106331851s
Sep 28 19:45:54.602: INFO: Pod "pod-subpath-test-dynamicpv-d6b5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.143163115s
STEP: Saw pod success
Sep 28 19:45:54.602: INFO: Pod "pod-subpath-test-dynamicpv-d6b5" satisfied condition "Succeeded or Failed"
Sep 28 19:45:54.637: INFO: Trying to get logs from node ip-172-20-36-158.ec2.internal pod pod-subpath-test-dynamicpv-d6b5 container test-container-volume-dynamicpv-d6b5: <nil>
STEP: delete the pod
Sep 28 19:45:54.717: INFO: Waiting for pod pod-subpath-test-dynamicpv-d6b5 to disappear
Sep 28 19:45:54.752: INFO: Pod pod-subpath-test-dynamicpv-d6b5 no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-d6b5
Sep 28 19:45:54.752: INFO: Deleting pod "pod-subpath-test-dynamicpv-d6b5" in namespace "provisioning-6129"
... skipping 54 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path","total":-1,"completed":44,"skipped":237,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]"]}
Sep 28 19:46:30.376: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-storage] PersistentVolumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 22 lines ...
Sep 28 19:41:12.658: INFO: PersistentVolumeClaim pvc-n78x8 found and phase=Bound (6.15380071s)
Sep 28 19:41:12.658: INFO: Waiting up to 3m0s for PersistentVolume nfs-mrjfp to have phase Bound
Sep 28 19:41:12.696: INFO: PersistentVolume nfs-mrjfp found and phase=Bound (37.592088ms)
STEP: Checking pod has write access to PersistentVolume
Sep 28 19:41:12.771: INFO: Creating nfs test pod
Sep 28 19:41:12.810: INFO: Pod should terminate with exitcode 0 (success)
Sep 28 19:41:12.810: INFO: Waiting up to 5m0s for pod "pvc-tester-mkmlv" in namespace "pv-7298" to be "Succeeded or Failed"
Sep 28 19:41:12.847: INFO: Pod "pvc-tester-mkmlv": Phase="Pending", Reason="", readiness=false. Elapsed: 37.414358ms
Sep 28 19:41:14.886: INFO: Pod "pvc-tester-mkmlv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075801512s
Sep 28 19:41:16.925: INFO: Pod "pvc-tester-mkmlv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.115171371s
Sep 28 19:41:18.964: INFO: Pod "pvc-tester-mkmlv": Phase="Pending", Reason="", readiness=false. Elapsed: 6.154382502s
Sep 28 19:41:21.003: INFO: Pod "pvc-tester-mkmlv": Phase="Pending", Reason="", readiness=false. Elapsed: 8.193119522s
Sep 28 19:41:23.042: INFO: Pod "pvc-tester-mkmlv": Phase="Pending", Reason="", readiness=false. Elapsed: 10.231876926s
... skipping 138 lines ...
Sep 28 19:46:06.544: INFO: Pod "pvc-tester-mkmlv": Phase="Pending", Reason="", readiness=false. Elapsed: 4m53.734209148s
Sep 28 19:46:08.583: INFO: Pod "pvc-tester-mkmlv": Phase="Pending", Reason="", readiness=false. Elapsed: 4m55.77320465s
Sep 28 19:46:10.622: INFO: Pod "pvc-tester-mkmlv": Phase="Pending", Reason="", readiness=false. Elapsed: 4m57.812030529s
Sep 28 19:46:12.661: INFO: Pod "pvc-tester-mkmlv": Phase="Pending", Reason="", readiness=false. Elapsed: 4m59.850955144s
Sep 28 19:46:14.661: INFO: Deleting pod "pvc-tester-mkmlv" in namespace "pv-7298"
Sep 28 19:46:14.700: INFO: Wait up to 5m0s for pod "pvc-tester-mkmlv" to be fully deleted
Sep 28 19:46:22.777: FAIL: Unexpected error:
    <*errors.errorString | 0xc00451ccc0>: {
        s: "pod \"pvc-tester-mkmlv\" did not exit with Success: pod \"pvc-tester-mkmlv\" failed to reach Success: Gave up after waiting 5m0s for pod \"pvc-tester-mkmlv\" to be \"Succeeded or Failed\"",
    }
    pod "pvc-tester-mkmlv" did not exit with Success: pod "pvc-tester-mkmlv" failed to reach Success: Gave up after waiting 5m0s for pod "pvc-tester-mkmlv" to be "Succeeded or Failed"
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/storage.completeTest(0xc0023731e0, 0x779f8f8, 0xc001005760, 0xc003ea29b9, 0x7, 0xc003acb680, 0xc0042b81c0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:52 +0x19c
k8s.io/kubernetes/test/e2e/storage.glob..func22.2.3.3()
... skipping 23 lines ...
Sep 28 19:46:37.007: INFO: At 2021-09-28 19:41:04 +0000 UTC - event for nfs-server: {kubelet ip-172-20-62-211.ec2.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/volume/nfs:1.2" already present on machine
Sep 28 19:46:37.007: INFO: At 2021-09-28 19:41:04 +0000 UTC - event for nfs-server: {kubelet ip-172-20-62-211.ec2.internal} Created: Created container nfs-server
Sep 28 19:46:37.007: INFO: At 2021-09-28 19:41:04 +0000 UTC - event for nfs-server: {kubelet ip-172-20-62-211.ec2.internal} Started: Started container nfs-server
Sep 28 19:46:37.007: INFO: At 2021-09-28 19:41:06 +0000 UTC - event for pvc-n78x8: {persistentvolume-controller } FailedBinding: no persistent volumes available for this claim and no storage class is set
Sep 28 19:46:37.007: INFO: At 2021-09-28 19:41:12 +0000 UTC - event for pvc-tester-mkmlv: {default-scheduler } Scheduled: Successfully assigned pv-7298/pvc-tester-mkmlv to ip-172-20-61-119.ec2.internal
Sep 28 19:46:37.007: INFO: At 2021-09-28 19:43:15 +0000 UTC - event for pvc-tester-mkmlv: {kubelet ip-172-20-61-119.ec2.internal} FailedMount: Unable to attach or mount volumes: unmounted volumes=[volume1], unattached volumes=[volume1 kube-api-access-t2wrk]: timed out waiting for the condition
Sep 28 19:46:37.007: INFO: At 2021-09-28 19:44:15 +0000 UTC - event for pvc-tester-mkmlv: {kubelet ip-172-20-61-119.ec2.internal} FailedMount: MountVolume.SetUp failed for volume "nfs-mrjfp" : mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t nfs 100.96.2.36:/exports /var/lib/kubelet/pods/f5533fbf-9ea3-455a-aaf5-94e8c4b71fcf/volumes/kubernetes.io~nfs/nfs-mrjfp
Output: mount.nfs: Connection timed out

Sep 28 19:46:37.007: INFO: At 2021-09-28 19:46:22 +0000 UTC - event for nfs-server: {kubelet ip-172-20-62-211.ec2.internal} Killing: Stopping container nfs-server
Sep 28 19:46:37.044: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
... skipping 150 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:122
    with Single PV - PVC pairs
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:155
      create a PVC and non-pre-bound PV: test write access [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:178

      Sep 28 19:46:22.777: Unexpected error:
          <*errors.errorString | 0xc00451ccc0>: {
              s: "pod \"pvc-tester-mkmlv\" did not exit with Success: pod \"pvc-tester-mkmlv\" failed to reach Success: Gave up after waiting 5m0s for pod \"pvc-tester-mkmlv\" to be \"Succeeded or Failed\"",
          }
          pod "pvc-tester-mkmlv" did not exit with Success: pod "pvc-tester-mkmlv" failed to reach Success: Gave up after waiting 5m0s for pod "pvc-tester-mkmlv" to be "Succeeded or Failed"
      occurred

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:52
------------------------------
{"msg":"FAILED [sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PVC and non-pre-bound PV: test write access","total":-1,"completed":24,"skipped":203,"failed":4,"failures":["[sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PVC and non-pre-bound PV: test write access"]}
Sep 28 19:46:39.002: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 38 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating replication controller my-hostname-basic-03c0b464-9bb1-4f18-8fd6-6854870bc892
Sep 28 19:43:49.624: INFO: Pod name my-hostname-basic-03c0b464-9bb1-4f18-8fd6-6854870bc892: Found 1 pods out of 1
Sep 28 19:43:49.624: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-03c0b464-9bb1-4f18-8fd6-6854870bc892" are running
Sep 28 19:43:51.700: INFO: Pod "my-hostname-basic-03c0b464-9bb1-4f18-8fd6-6854870bc892-nvpcs" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-09-28 19:43:49 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-09-28 19:43:49 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-03c0b464-9bb1-4f18-8fd6-6854870bc892]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-09-28 19:43:49 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-03c0b464-9bb1-4f18-8fd6-6854870bc892]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-09-28 19:43:49 +0000 UTC Reason: Message:}])
Sep 28 19:43:51.700: INFO: Trying to dial the pod
Sep 28 19:44:26.816: INFO: Controller my-hostname-basic-03c0b464-9bb1-4f18-8fd6-6854870bc892: Failed to GET from replica 1 [my-hostname-basic-03c0b464-9bb1-4f18-8fd6-6854870bc892-nvpcs]: the server is currently unable to handle the request (get pods my-hostname-basic-03c0b464-9bb1-4f18-8fd6-6854870bc892-nvpcs)
pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63768455029, loc:(*time.Location)(0x9e12f00)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63768455029, loc:(*time.Location)(0x9e12f00)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [my-hostname-basic-03c0b464-9bb1-4f18-8fd6-6854870bc892]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63768455029, loc:(*time.Location)(0x9e12f00)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [my-hostname-basic-03c0b464-9bb1-4f18-8fd6-6854870bc892]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63768455029, loc:(*time.Location)(0x9e12f00)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.20.36.158", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(0xc004244af8), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"my-hostname-basic-03c0b464-9bb1-4f18-8fd6-6854870bc892", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0036d6aa0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, 
RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/agnhost:2.32", ImageID:"", ContainerID:"", Started:(*bool)(0xc006340d9d)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
Sep 28 19:45:01.810: INFO: Controller my-hostname-basic-03c0b464-9bb1-4f18-8fd6-6854870bc892: Failed to GET from replica 1 [my-hostname-basic-03c0b464-9bb1-4f18-8fd6-6854870bc892-nvpcs]: the server is currently unable to handle the request (get pods my-hostname-basic-03c0b464-9bb1-4f18-8fd6-6854870bc892-nvpcs)
pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63768455029, loc:(*time.Location)(0x9e12f00)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63768455029, loc:(*time.Location)(0x9e12f00)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [my-hostname-basic-03c0b464-9bb1-4f18-8fd6-6854870bc892]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63768455029, loc:(*time.Location)(0x9e12f00)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [my-hostname-basic-03c0b464-9bb1-4f18-8fd6-6854870bc892]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63768455029, loc:(*time.Location)(0x9e12f00)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.20.36.158", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(0xc004244af8), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"my-hostname-basic-03c0b464-9bb1-4f18-8fd6-6854870bc892", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0036d6aa0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, 
RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/agnhost:2.32", ImageID:"", ContainerID:"", Started:(*bool)(0xc006340d9d)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
Sep 28 19:45:36.812: INFO: Controller my-hostname-basic-03c0b464-9bb1-4f18-8fd6-6854870bc892: Failed to GET from replica 1 [my-hostname-basic-03c0b464-9bb1-4f18-8fd6-6854870bc892-nvpcs]: the server is currently unable to handle the request (get pods my-hostname-basic-03c0b464-9bb1-4f18-8fd6-6854870bc892-nvpcs)
pod status: unchanged from the dump above (still Pending, ContainersNotReady)
Sep 28 19:46:11.811: INFO: Controller my-hostname-basic-03c0b464-9bb1-4f18-8fd6-6854870bc892: Failed to GET from replica 1 [my-hostname-basic-03c0b464-9bb1-4f18-8fd6-6854870bc892-nvpcs]: the server is currently unable to handle the request (get pods my-hostname-basic-03c0b464-9bb1-4f18-8fd6-6854870bc892-nvpcs)
pod status: unchanged from the dump above (still Pending, ContainersNotReady)
Sep 28 19:46:41.922: INFO: Controller my-hostname-basic-03c0b464-9bb1-4f18-8fd6-6854870bc892: Failed to GET from replica 1 [my-hostname-basic-03c0b464-9bb1-4f18-8fd6-6854870bc892-nvpcs]: the server is currently unable to handle the request (get pods my-hostname-basic-03c0b464-9bb1-4f18-8fd6-6854870bc892-nvpcs)
pod status: unchanged from the dump above (still Pending, ContainersNotReady)
Sep 28 19:46:41.922: FAIL: Did not get expected responses within the timeout period of 120.00 seconds.

Full Stack Trace
k8s.io/kubernetes/test/e2e/apps.glob..func8.2()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:65 +0x57
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc003a11980)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
... skipping 167 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Sep 28 19:46:41.922: Did not get expected responses within the timeout period of 120.00 seconds.

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:65
------------------------------
{"msg":"FAILED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","total":-1,"completed":28,"skipped":160,"failed":2,"failures":["[sig-network] DNS should resolve DNS of partial qualified names for the cluster [LinuxOnly]","[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}
Sep 28 19:46:43.876: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 10 lines ...
I0928 19:44:26.484022    5386 runners.go:190] Created replication controller with name: externalname-service, namespace: services-7656, replica count: 2
I0928 19:44:29.535357    5386 runners.go:190] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0928 19:44:32.536647    5386 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Sep 28 19:44:32.536: INFO: Creating new exec pod
Sep 28 19:44:35.692: INFO: Running '/tmp/kubectl2271960906/kubectl --server=https://api.e2e-b08e534318-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7656 exec execpodjmpq2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Sep 28 19:44:41.261: INFO: rc: 1
Sep 28 19:44:41.261: INFO: Service reachability failing with error: error running /tmp/kubectl2271960906/kubectl --server=https://api.e2e-b08e534318-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7656 exec execpodjmpq2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalname-service 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
... skipping 20 near-identical retry attempts (Sep 28 19:44:42 through 19:46:36, retried roughly every 6s, each failing the same way: nc: getaddrinfo: Try again, exit status 1) ...
Sep 28 19:46:42.006: INFO: Running '/tmp/kubectl2271960906/kubectl --server=https://api.e2e-b08e534318-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7656 exec execpodjmpq2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Sep 28 19:46:47.532: INFO: rc: 1
Sep 28 19:46:47.532: INFO: Service reachability failing with error: error running /tmp/kubectl2271960906/kubectl --server=https://api.e2e-b08e534318-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7656 exec execpodjmpq2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalname-service 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep 28 19:46:47.532: FAIL: Unexpected error:
    <*errors.errorString | 0xc00370e0b0>: {
        s: "service is not reachable within 2m0s timeout on endpoint externalname-service:80 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint externalname-service:80 over TCP protocol
occurred

... skipping 182 lines ...
• Failure [143.413 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to change the type from ExternalName to NodePort [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Sep 28 19:46:47.532: Unexpected error:
      <*errors.errorString | 0xc00370e0b0>: {
          s: "service is not reachable within 2m0s timeout on endpoint externalname-service:80 over TCP protocol",
      }
      service is not reachable within 2m0s timeout on endpoint externalname-service:80 over TCP protocol
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1351
------------------------------
{"msg":"FAILED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":-1,"completed":20,"skipped":187,"failed":4,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]}
Sep 28 19:46:49.556: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral
... skipping 109 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support two pods which share the same volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:173
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support two pods which share the same volume","total":-1,"completed":43,"skipped":268,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]"]}
Sep 28 19:47:13.386: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 278 lines ...
  ----    ------     ----  ----               -------
  Normal  Scheduled  35s   default-scheduler  Successfully assigned pod-network-test-4355/netserver-3 to ip-172-20-62-211.ec2.internal
  Normal  Pulled     35s   kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
  Normal  Created    35s   kubelet            Created container webserver
  Normal  Started    35s   kubelet            Started container webserver

Sep 28 19:35:59.584: INFO: encountered error during dial (did not find expected responses... 
Tries 1
Command curl -g -q -s 'http://100.96.2.198:9080/dial?request=hostname&protocol=http&host=100.96.4.211&port=8080&tries=1'
retrieved map[]
expected map[netserver-1:{}])
Sep 28 19:35:59.584: INFO: ...failed...will try again in next pass
Sep 28 19:35:59.584: INFO: Breadth first check of 100.96.3.206 on host 172.20.61.119...
Sep 28 19:35:59.621: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://100.96.2.198:9080/dial?request=hostname&protocol=http&host=100.96.3.206&port=8080&tries=1'] Namespace:pod-network-test-4355 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Sep 28 19:35:59.621: INFO: >>> kubeConfig: /root/.kube/config
Sep 28 19:36:04.951: INFO: Waiting for responses: map[netserver-2:{}]
Sep 28 19:36:06.951: INFO: 
Output of kubectl describe pod pod-network-test-4355/netserver-0:
... skipping 240 lines ...
  ----    ------     ----  ----               -------
  Normal  Scheduled  44s   default-scheduler  Successfully assigned pod-network-test-4355/netserver-3 to ip-172-20-62-211.ec2.internal
  Normal  Pulled     44s   kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
  Normal  Created    44s   kubelet            Created container webserver
  Normal  Started    44s   kubelet            Started container webserver

Sep 28 19:36:08.092: INFO: encountered error during dial (did not find expected responses... 
Tries 1
Command curl -g -q -s 'http://100.96.2.198:9080/dial?request=hostname&protocol=http&host=100.96.3.206&port=8080&tries=1'
retrieved map[]
expected map[netserver-2:{}])
Sep 28 19:36:08.092: INFO: ...failed...will try again in next pass
Sep 28 19:36:08.092: INFO: Breadth first check of 100.96.2.197 on host 172.20.62.211...
Sep 28 19:36:08.131: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://100.96.2.198:9080/dial?request=hostname&protocol=http&host=100.96.2.197&port=8080&tries=1'] Namespace:pod-network-test-4355 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Sep 28 19:36:08.131: INFO: >>> kubeConfig: /root/.kube/config
Sep 28 19:36:08.462: INFO: Waiting for responses: map[]
Sep 28 19:36:08.462: INFO: reached 100.96.2.197 after 0/1 tries
Sep 28 19:36:08.462: INFO: Going to retry 2 out of 4 pods....
... skipping 382 lines ...
  ----    ------     ----   ----               -------
  Normal  Scheduled  6m25s  default-scheduler  Successfully assigned pod-network-test-4355/netserver-3 to ip-172-20-62-211.ec2.internal
  Normal  Pulled     6m25s  kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
  Normal  Created    6m25s  kubelet            Created container webserver
  Normal  Started    6m25s  kubelet            Started container webserver

Sep 28 19:41:49.294: INFO: encountered error during dial (did not find expected responses... 
Tries 46
Command curl -g -q -s 'http://100.96.2.198:9080/dial?request=hostname&protocol=http&host=100.96.4.211&port=8080&tries=1'
retrieved map[]
expected map[netserver-1:{}])
Sep 28 19:41:49.294: INFO: ... Done probing pod [[[ 100.96.4.211 ]]]
Sep 28 19:41:49.294: INFO: succeeded at polling 3 out of 4 connections
... skipping 382 lines ...
  ----    ------     ----  ----               -------
  Normal  Scheduled  12m   default-scheduler  Successfully assigned pod-network-test-4355/netserver-3 to ip-172-20-62-211.ec2.internal
  Normal  Pulled     12m   kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
  Normal  Created    12m   kubelet            Created container webserver
  Normal  Started    12m   kubelet            Started container webserver

Sep 28 19:47:29.983: INFO: encountered error during dial (did not find expected responses... 
Tries 46
Command curl -g -q -s 'http://100.96.2.198:9080/dial?request=hostname&protocol=http&host=100.96.3.206&port=8080&tries=1'
retrieved map[]
expected map[netserver-2:{}])
Sep 28 19:47:29.983: INFO: ... Done probing pod [[[ 100.96.3.206 ]]]
Sep 28 19:47:29.983: INFO: succeeded at polling 2 out of 4 connections
Sep 28 19:47:29.983: INFO: pod polling failure summary:
Sep 28 19:47:29.983: INFO: Collected error: did not find expected responses... 
Tries 46
Command curl -g -q -s 'http://100.96.2.198:9080/dial?request=hostname&protocol=http&host=100.96.4.211&port=8080&tries=1'
retrieved map[]
expected map[netserver-1:{}]
Sep 28 19:47:29.983: INFO: Collected error: did not find expected responses... 
Tries 46
Command curl -g -q -s 'http://100.96.2.198:9080/dial?request=hostname&protocol=http&host=100.96.3.206&port=8080&tries=1'
retrieved map[]
expected map[netserver-2:{}]
Sep 28 19:47:29.983: FAIL: failed,  2 out of 4 connections failed

Full Stack Trace
k8s.io/kubernetes/test/e2e/common/network.glob..func1.1.2()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:82 +0x69
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc003371800)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
... skipping 160 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23
  Granular Checks: Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30
    should function for intra-pod communication: http [NodeConformance] [Conformance] [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

    Sep 28 19:47:29.983: failed,  2 out of 4 connections failed

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:82
------------------------------
{"msg":"FAILED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":-1,"completed":26,"skipped":164,"failed":2,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}
Sep 28 19:47:32.003: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 16 lines ...
Sep 28 19:31:33.144: INFO: Unable to read wheezy_udp@PodARecord from pod dns-1808/dns-test-c3dda7cd-58d5-49d9-b33d-4330f8aff5b5: the server is currently unable to handle the request (get pods dns-test-c3dda7cd-58d5-49d9-b33d-4330f8aff5b5)
Sep 28 19:32:03.182: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-1808/dns-test-c3dda7cd-58d5-49d9-b33d-4330f8aff5b5: the server is currently unable to handle the request (get pods dns-test-c3dda7cd-58d5-49d9-b33d-4330f8aff5b5)
Sep 28 19:32:33.224: INFO: Unable to read jessie_hosts@dns-querier-2.dns-test-service-2.dns-1808.svc.cluster.local from pod dns-1808/dns-test-c3dda7cd-58d5-49d9-b33d-4330f8aff5b5: the server is currently unable to handle the request (get pods dns-test-c3dda7cd-58d5-49d9-b33d-4330f8aff5b5)
Sep 28 19:33:03.262: INFO: Unable to read jessie_hosts@dns-querier-2 from pod dns-1808/dns-test-c3dda7cd-58d5-49d9-b33d-4330f8aff5b5: the server is currently unable to handle the request (get pods dns-test-c3dda7cd-58d5-49d9-b33d-4330f8aff5b5)
Sep 28 19:33:33.301: INFO: Unable to read jessie_udp@PodARecord from pod dns-1808/dns-test-c3dda7cd-58d5-49d9-b33d-4330f8aff5b5: the server is currently unable to handle the request (get pods dns-test-c3dda7cd-58d5-49d9-b33d-4330f8aff5b5)
Sep 28 19:34:03.342: INFO: Unable to read jessie_tcp@PodARecord from pod dns-1808/dns-test-c3dda7cd-58d5-49d9-b33d-4330f8aff5b5: the server is currently unable to handle the request (get pods dns-test-c3dda7cd-58d5-49d9-b33d-4330f8aff5b5)
Sep 28 19:34:03.342: INFO: Lookups using dns-1808/dns-test-c3dda7cd-58d5-49d9-b33d-4330f8aff5b5 failed for: [wheezy_hosts@dns-querier-2.dns-test-service-2.dns-1808.svc.cluster.local wheezy_hosts@dns-querier-2 wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-2.dns-test-service-2.dns-1808.svc.cluster.local jessie_hosts@dns-querier-2 jessie_udp@PodARecord jessie_tcp@PodARecord]

Sep 28 19:34:38.423: INFO: Unable to read wheezy_hosts@dns-querier-2.dns-test-service-2.dns-1808.svc.cluster.local from pod dns-1808/dns-test-c3dda7cd-58d5-49d9-b33d-4330f8aff5b5: the server is currently unable to handle the request (get pods dns-test-c3dda7cd-58d5-49d9-b33d-4330f8aff5b5)
Sep 28 19:35:08.463: INFO: Unable to read wheezy_hosts@dns-querier-2 from pod dns-1808/dns-test-c3dda7cd-58d5-49d9-b33d-4330f8aff5b5: the server is currently unable to handle the request (get pods dns-test-c3dda7cd-58d5-49d9-b33d-4330f8aff5b5)
Sep 28 19:35:38.502: INFO: Unable to read wheezy_udp@PodARecord from pod dns-1808/dns-test-c3dda7cd-58d5-49d9-b33d-4330f8aff5b5: the server is currently unable to handle the request (get pods dns-test-c3dda7cd-58d5-49d9-b33d-4330f8aff5b5)
Sep 28 19:36:08.543: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-1808/dns-test-c3dda7cd-58d5-49d9-b33d-4330f8aff5b5: the server is currently unable to handle the request (get pods dns-test-c3dda7cd-58d5-49d9-b33d-4330f8aff5b5)
Sep 28 19:36:38.581: INFO: Unable to read jessie_hosts@dns-querier-2.dns-test-service-2.dns-1808.svc.cluster.local from pod dns-1808/dns-test-c3dda7cd-58d5-49d9-b33d-4330f8aff5b5: the server is currently unable to handle the request (get pods dns-test-c3dda7cd-58d5-49d9-b33d-4330f8aff5b5)
Sep 28 19:37:08.622: INFO: Unable to read jessie_hosts@dns-querier-2 from pod dns-1808/dns-test-c3dda7cd-58d5-49d9-b33d-4330f8aff5b5: the server is currently unable to handle the request (get pods dns-test-c3dda7cd-58d5-49d9-b33d-4330f8aff5b5)
Sep 28 19:37:38.661: INFO: Unable to read jessie_udp@PodARecord from pod dns-1808/dns-test-c3dda7cd-58d5-49d9-b33d-4330f8aff5b5: the server is currently unable to handle the request (get pods dns-test-c3dda7cd-58d5-49d9-b33d-4330f8aff5b5)
Sep 28 19:38:08.701: INFO: Unable to read jessie_tcp@PodARecord from pod dns-1808/dns-test-c3dda7cd-58d5-49d9-b33d-4330f8aff5b5: the server is currently unable to handle the request (get pods dns-test-c3dda7cd-58d5-49d9-b33d-4330f8aff5b5)
Sep 28 19:38:08.701: INFO: Lookups using dns-1808/dns-test-c3dda7cd-58d5-49d9-b33d-4330f8aff5b5 failed for: [wheezy_hosts@dns-querier-2.dns-test-service-2.dns-1808.svc.cluster.local wheezy_hosts@dns-querier-2 wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-2.dns-test-service-2.dns-1808.svc.cluster.local jessie_hosts@dns-querier-2 jessie_udp@PodARecord jessie_tcp@PodARecord]

Sep 28 19:38:43.387: INFO: Unable to read wheezy_hosts@dns-querier-2.dns-test-service-2.dns-1808.svc.cluster.local from pod dns-1808/dns-test-c3dda7cd-58d5-49d9-b33d-4330f8aff5b5: the server is currently unable to handle the request (get pods dns-test-c3dda7cd-58d5-49d9-b33d-4330f8aff5b5)
Sep 28 19:39:13.425: INFO: Unable to read wheezy_hosts@dns-querier-2 from pod dns-1808/dns-test-c3dda7cd-58d5-49d9-b33d-4330f8aff5b5: the server is currently unable to handle the request (get pods dns-test-c3dda7cd-58d5-49d9-b33d-4330f8aff5b5)
Sep 28 19:39:43.464: INFO: Unable to read wheezy_udp@PodARecord from pod dns-1808/dns-test-c3dda7cd-58d5-49d9-b33d-4330f8aff5b5: the server is currently unable to handle the request (get pods dns-test-c3dda7cd-58d5-49d9-b33d-4330f8aff5b5)
Sep 28 19:40:13.506: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-1808/dns-test-c3dda7cd-58d5-49d9-b33d-4330f8aff5b5: the server is currently unable to handle the request (get pods dns-test-c3dda7cd-58d5-49d9-b33d-4330f8aff5b5)
Sep 28 19:40:43.545: INFO: Unable to read jessie_hosts@dns-querier-2.dns-test-service-2.dns-1808.svc.cluster.local from pod dns-1808/dns-test-c3dda7cd-58d5-49d9-b33d-4330f8aff5b5: the server is currently unable to handle the request (get pods dns-test-c3dda7cd-58d5-49d9-b33d-4330f8aff5b5)
Sep 28 19:41:13.583: INFO: Unable to read jessie_hosts@dns-querier-2 from pod dns-1808/dns-test-c3dda7cd-58d5-49d9-b33d-4330f8aff5b5: the server is currently unable to handle the request (get pods dns-test-c3dda7cd-58d5-49d9-b33d-4330f8aff5b5)
Sep 28 19:41:43.622: INFO: Unable to read jessie_udp@PodARecord from pod dns-1808/dns-test-c3dda7cd-58d5-49d9-b33d-4330f8aff5b5: the server is currently unable to handle the request (get pods dns-test-c3dda7cd-58d5-49d9-b33d-4330f8aff5b5)
Sep 28 19:42:13.660: INFO: Unable to read jessie_tcp@PodARecord from pod dns-1808/dns-test-c3dda7cd-58d5-49d9-b33d-4330f8aff5b5: the server is currently unable to handle the request (get pods dns-test-c3dda7cd-58d5-49d9-b33d-4330f8aff5b5)
Sep 28 19:42:13.660: INFO: Lookups using dns-1808/dns-test-c3dda7cd-58d5-49d9-b33d-4330f8aff5b5 failed for: [wheezy_hosts@dns-querier-2.dns-test-service-2.dns-1808.svc.cluster.local wheezy_hosts@dns-querier-2 wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-2.dns-test-service-2.dns-1808.svc.cluster.local jessie_hosts@dns-querier-2 jessie_udp@PodARecord jessie_tcp@PodARecord]

Sep 28 19:42:48.384: INFO: Unable to read wheezy_hosts@dns-querier-2.dns-test-service-2.dns-1808.svc.cluster.local from pod dns-1808/dns-test-c3dda7cd-58d5-49d9-b33d-4330f8aff5b5: the server is currently unable to handle the request (get pods dns-test-c3dda7cd-58d5-49d9-b33d-4330f8aff5b5)
Sep 28 19:43:18.423: INFO: Unable to read wheezy_hosts@dns-querier-2 from pod dns-1808/dns-test-c3dda7cd-58d5-49d9-b33d-4330f8aff5b5: the server is currently unable to handle the request (get pods dns-test-c3dda7cd-58d5-49d9-b33d-4330f8aff5b5)
Sep 28 19:43:48.463: INFO: Unable to read wheezy_udp@PodARecord from pod dns-1808/dns-test-c3dda7cd-58d5-49d9-b33d-4330f8aff5b5: the server is currently unable to handle the request (get pods dns-test-c3dda7cd-58d5-49d9-b33d-4330f8aff5b5)
Sep 28 19:44:18.506: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-1808/dns-test-c3dda7cd-58d5-49d9-b33d-4330f8aff5b5: the server is currently unable to handle the request (get pods dns-test-c3dda7cd-58d5-49d9-b33d-4330f8aff5b5)
Sep 28 19:44:48.544: INFO: Unable to read jessie_hosts@dns-querier-2.dns-test-service-2.dns-1808.svc.cluster.local from pod dns-1808/dns-test-c3dda7cd-58d5-49d9-b33d-4330f8aff5b5: the server is currently unable to handle the request (get pods dns-test-c3dda7cd-58d5-49d9-b33d-4330f8aff5b5)
Sep 28 19:45:18.584: INFO: Unable to read jessie_hosts@dns-querier-2 from pod dns-1808/dns-test-c3dda7cd-58d5-49d9-b33d-4330f8aff5b5: the server is currently unable to handle the request (get pods dns-test-c3dda7cd-58d5-49d9-b33d-4330f8aff5b5)
Sep 28 19:45:48.626: INFO: Unable to read jessie_udp@PodARecord from pod dns-1808/dns-test-c3dda7cd-58d5-49d9-b33d-4330f8aff5b5: the server is currently unable to handle the request (get pods dns-test-c3dda7cd-58d5-49d9-b33d-4330f8aff5b5)
Sep 28 19:46:18.664: INFO: Unable to read jessie_tcp@PodARecord from pod dns-1808/dns-test-c3dda7cd-58d5-49d9-b33d-4330f8aff5b5: the server is currently unable to handle the request (get pods dns-test-c3dda7cd-58d5-49d9-b33d-4330f8aff5b5)
Sep 28 19:46:18.665: INFO: Lookups using dns-1808/dns-test-c3dda7cd-58d5-49d9-b33d-4330f8aff5b5 failed for: [wheezy_hosts@dns-querier-2.dns-test-service-2.dns-1808.svc.cluster.local wheezy_hosts@dns-querier-2 wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-2.dns-test-service-2.dns-1808.svc.cluster.local jessie_hosts@dns-querier-2 jessie_udp@PodARecord jessie_tcp@PodARecord]

Sep 28 19:46:48.703: INFO: Unable to read wheezy_hosts@dns-querier-2.dns-test-service-2.dns-1808.svc.cluster.local from pod dns-1808/dns-test-c3dda7cd-58d5-49d9-b33d-4330f8aff5b5: the server is currently unable to handle the request (get pods dns-test-c3dda7cd-58d5-49d9-b33d-4330f8aff5b5)
Sep 28 19:47:18.741: INFO: Unable to read wheezy_hosts@dns-querier-2 from pod dns-1808/dns-test-c3dda7cd-58d5-49d9-b33d-4330f8aff5b5: the server is currently unable to handle the request (get pods dns-test-c3dda7cd-58d5-49d9-b33d-4330f8aff5b5)
Sep 28 19:47:48.780: INFO: Unable to read wheezy_udp@PodARecord from pod dns-1808/dns-test-c3dda7cd-58d5-49d9-b33d-4330f8aff5b5: the server is currently unable to handle the request (get pods dns-test-c3dda7cd-58d5-49d9-b33d-4330f8aff5b5)
Sep 28 19:48:18.819: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-1808/dns-test-c3dda7cd-58d5-49d9-b33d-4330f8aff5b5: the server is currently unable to handle the request (get pods dns-test-c3dda7cd-58d5-49d9-b33d-4330f8aff5b5)
Sep 28 19:48:48.858: INFO: Unable to read jessie_hosts@dns-querier-2.dns-test-service-2.dns-1808.svc.cluster.local from pod dns-1808/dns-test-c3dda7cd-58d5-49d9-b33d-4330f8aff5b5: the server is currently unable to handle the request (get pods dns-test-c3dda7cd-58d5-49d9-b33d-4330f8aff5b5)
Sep 28 19:49:18.896: INFO: Unable to read jessie_hosts@dns-querier-2 from pod dns-1808/dns-test-c3dda7cd-58d5-49d9-b33d-4330f8aff5b5: the server is currently unable to handle the request (get pods dns-test-c3dda7cd-58d5-49d9-b33d-4330f8aff5b5)
Sep 28 19:49:48.935: INFO: Unable to read jessie_udp@PodARecord from pod dns-1808/dns-test-c3dda7cd-58d5-49d9-b33d-4330f8aff5b5: the server is currently unable to handle the request (get pods dns-test-c3dda7cd-58d5-49d9-b33d-4330f8aff5b5)
Sep 28 19:50:18.975: INFO: Unable to read jessie_tcp@PodARecord from pod dns-1808/dns-test-c3dda7cd-58d5-49d9-b33d-4330f8aff5b5: the server is currently unable to handle the request (get pods dns-test-c3dda7cd-58d5-49d9-b33d-4330f8aff5b5)
Sep 28 19:50:18.975: INFO: Lookups using dns-1808/dns-test-c3dda7cd-58d5-49d9-b33d-4330f8aff5b5 failed for: [wheezy_hosts@dns-querier-2.dns-test-service-2.dns-1808.svc.cluster.local wheezy_hosts@dns-querier-2 wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-2.dns-test-service-2.dns-1808.svc.cluster.local jessie_hosts@dns-querier-2 jessie_udp@PodARecord jessie_tcp@PodARecord]

Sep 28 19:50:18.976: FAIL: Unexpected error:
    <*errors.errorString | 0xc00023e250>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 31 lines ...
Sep 28 19:50:19.110: INFO: At 2021-09-28 19:29:58 +0000 UTC - event for dns-test-c3dda7cd-58d5-49d9-b33d-4330f8aff5b5: {kubelet ip-172-20-61-119.ec2.internal} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4" in 8.306820634s
Sep 28 19:50:19.110: INFO: At 2021-09-28 19:29:58 +0000 UTC - event for dns-test-c3dda7cd-58d5-49d9-b33d-4330f8aff5b5: {kubelet ip-172-20-61-119.ec2.internal} Created: Created container jessie-querier
Sep 28 19:50:19.110: INFO: At 2021-09-28 19:29:58 +0000 UTC - event for dns-test-c3dda7cd-58d5-49d9-b33d-4330f8aff5b5: {kubelet ip-172-20-61-119.ec2.internal} Started: Started container jessie-querier
Sep 28 19:50:19.110: INFO: At 2021-09-28 19:50:19 +0000 UTC - event for dns-test-c3dda7cd-58d5-49d9-b33d-4330f8aff5b5: {kubelet ip-172-20-61-119.ec2.internal} Killing: Stopping container webserver
Sep 28 19:50:19.110: INFO: At 2021-09-28 19:50:19 +0000 UTC - event for dns-test-c3dda7cd-58d5-49d9-b33d-4330f8aff5b5: {kubelet ip-172-20-61-119.ec2.internal} Killing: Stopping container jessie-querier
Sep 28 19:50:19.110: INFO: At 2021-09-28 19:50:19 +0000 UTC - event for dns-test-c3dda7cd-58d5-49d9-b33d-4330f8aff5b5: {kubelet ip-172-20-61-119.ec2.internal} Killing: Stopping container querier
Sep 28 19:50:19.110: INFO: At 2021-09-28 19:50:19 +0000 UTC - event for dns-test-service-2: {endpoint-controller } FailedToUpdateEndpoint: Failed to update endpoint dns-1808/dns-test-service-2: Operation cannot be fulfilled on endpoints "dns-test-service-2": the object has been modified; please apply your changes to the latest version and try again
Sep 28 19:50:19.148: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Sep 28 19:50:19.148: INFO: 
Sep 28 19:50:19.187: INFO: 
Logging node info for node ip-172-20-36-158.ec2.internal
Sep 28 19:50:19.225: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-36-158.ec2.internal    93c10778-457f-4d8a-9f72-2e1a50bb9359 44429 0 2021-09-28 19:20:24 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:us-east-1 failure-domain.beta.kubernetes.io/zone:us-east-1a kops.k8s.io/instancegroup:nodes-us-east-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-36-158.ec2.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.hostpath.csi/node:ip-172-20-36-158.ec2.internal topology.kubernetes.io/region:us-east-1 topology.kubernetes.io/zone:us-east-1a] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kops-controller Update v1 2021-09-28 19:20:24 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/instancegroup":{},"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}}} {kubelet Update v1 2021-09-28 19:45:45 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.hostpath.csi/node":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}}} {kube-controller-manager Update v1 2021-09-28 19:45:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.1.0/24\"":{}}}}}]},Spec:NodeSpec{PodCIDR:100.96.1.0/24,DoNotUseExternalID:,ProviderID:aws:///us-east-1a/i-03f17841d09a5163a,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-aws-ebs: {{25 0} {<nil>} 25 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{47455764480 0} {<nil>}  BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4061724672 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-aws-ebs: {{25 0} {<nil>} 25 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42710187962 0} {<nil>} 42710187962 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 
DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3956867072 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-09-28 19:46:00 +0000 UTC,LastTransitionTime:2021-09-28 19:20:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-09-28 19:46:00 +0000 UTC,LastTransitionTime:2021-09-28 19:20:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-09-28 19:46:00 +0000 UTC,LastTransitionTime:2021-09-28 19:20:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-09-28 19:46:00 +0000 UTC,LastTransitionTime:2021-09-28 19:20:34 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.36.158,},NodeAddress{Type:ExternalIP,Address:52.91.99.224,},NodeAddress{Type:Hostname,Address:ip-172-20-36-158.ec2.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-36-158.ec2.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-52-91-99-224.compute-1.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec211721c19a6522a00296d52b725cbb,SystemUUID:ec211721-c19a-6522-a002-96d52b725cbb,BootID:0f938b1e-a6e4-4703-823e-56774e82ed59,KernelVersion:5.10.67-flatcar,OSImage:Flatcar Container Linux by Kinvolk 2905.2.4 (Oklo),ContainerRuntimeVersion:containerd://1.5.4,KubeletVersion:v1.21.5,KubeProxyVersion:v1.21.5,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 
k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.21.5],SizeBytes:105352393,},ContainerImage{Names:[docker.io/library/nginx@sha256:969419c0b7b0a5f40a4d666ad227360de5874930a2b228a7c11e15dedbc6e092 docker.io/library/nginx:latest],SizeBytes:53799606,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:50002177,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[docker.io/kopeio/networking-agent@sha256:2d16bdbc3257c42cdc59b05b8fad86653033f19cfafa709f263e93c8f7002932 docker.io/kopeio/networking-agent:1.0.20181028],SizeBytes:25781346,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:d226f3fd5f293ff513f53573a40c069b89d57d42338a1045b493bf702ac6b1f6 k8s.gcr.io/build-image/debian-iptables:buster-v1.6.5],SizeBytes:23799574,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:695505fcfcc69f1cf35665dce487aad447adbb9af69b796d6437f869015d1157 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.1],SizeBytes:21212251,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 
k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:51f2dfde5bccac7854b3704689506aeecfb793328427b91115ba253a93e60782 k8s.gcr.io/sig-storage/csi-snapshotter:v4.0.0],SizeBytes:20194320,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:50c3cfd458fc8e0bf3c8c521eac39172009382fc66dc5044a330d137c6ed0b09 k8s.gcr.io/sig-storage/csi-attacher:v3.1.0],SizeBytes:20103959,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a k8s.gcr.io/sig-storage/csi-resizer:v1.1.0],SizeBytes:20096832,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def k8s.gcr.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:d2b357bb02430fee9eaa43b16083981463d260419fe3acb2f560ede5c129f6f5 k8s.gcr.io/sig-storage/hostpathplugin:v1.4.0],SizeBytes:13995876,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:6e5a02c21641597998b4be7cb5eb1e7b02c0d8d23cce4dd09f4682d463798890 k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:e07f914c32f0505e4c470a62a40ee43f84cbf8dc46ff861f31b14457ccbad108 
k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.0.1],SizeBytes:8415088,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:48da0e4ed7238ad461ea05f68c25921783c37b315f21a5c5a2780157a6460994 k8s.gcr.io/sig-storage/livenessprobe:v2.2.0],SizeBytes:8279778,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause:3.5],SizeBytes:301416,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-provisioning-1597^04e19cc8-2093-11ec-8233-426f905fbbb7 kubernetes.io/csi/csi-hostpath-provisioning-8863^2a5d5eac-2092-11ec-bf30-22403311e93d],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Sep 28 19:50:19.226: INFO: 
... skipping 103 lines ...
• Failure [1232.446 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Sep 28 19:50:18.976: Unexpected error:
      <*errors.errorString | 0xc00023e250>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:463
------------------------------
{"msg":"FAILED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":-1,"completed":7,"skipped":26,"failed":1,"failures":["[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]"]}
Sep 28 19:50:21.085: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 10 lines ...
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Sep 28 19:42:35.816: INFO: Unable to read wheezy_udp@dns-test-service-3.dns-152.svc.cluster.local from pod dns-152/dns-test-814e4d3f-3339-4e94-913c-0aff3ac28dc1: the server is currently unable to handle the request (get pods dns-test-814e4d3f-3339-4e94-913c-0aff3ac28dc1)
Sep 28 19:43:05.855: INFO: Unable to read jessie_udp@dns-test-service-3.dns-152.svc.cluster.local from pod dns-152/dns-test-814e4d3f-3339-4e94-913c-0aff3ac28dc1: the server is currently unable to handle the request (get pods dns-test-814e4d3f-3339-4e94-913c-0aff3ac28dc1)
Sep 28 19:43:05.855: INFO: Lookups using dns-152/dns-test-814e4d3f-3339-4e94-913c-0aff3ac28dc1 failed for: [wheezy_udp@dns-test-service-3.dns-152.svc.cluster.local jessie_udp@dns-test-service-3.dns-152.svc.cluster.local]

... skipping 39 lines ...

Sep 28 19:54:25.973: INFO: Unable to read wheezy_udp@dns-test-service-3.dns-152.svc.cluster.local from pod dns-152/dns-test-814e4d3f-3339-4e94-913c-0aff3ac28dc1: the server is currently unable to handle the request (get pods dns-test-814e4d3f-3339-4e94-913c-0aff3ac28dc1)
Sep 28 19:54:56.012: INFO: Unable to read jessie_udp@dns-test-service-3.dns-152.svc.cluster.local from pod dns-152/dns-test-814e4d3f-3339-4e94-913c-0aff3ac28dc1: the server is currently unable to handle the request (get pods dns-test-814e4d3f-3339-4e94-913c-0aff3ac28dc1)
Sep 28 19:54:56.012: INFO: Lookups using dns-152/dns-test-814e4d3f-3339-4e94-913c-0aff3ac28dc1 failed for: [wheezy_udp@dns-test-service-3.dns-152.svc.cluster.local jessie_udp@dns-test-service-3.dns-152.svc.cluster.local]
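The names being probed above follow the standard in-cluster service DNS convention, `<service>.<namespace>.svc.<cluster-domain>` (here `dns-test-service-3` in namespace `dns-152`). A minimal sketch of how such a name is assembled (the helper name is illustrative, not part of the e2e framework):

```python
def service_fqdn(service: str, namespace: str, cluster_domain: str = "cluster.local") -> str:
    """Build the in-cluster DNS name for a Service, e.g. the
    dns-test-service-3.dns-152.svc.cluster.local name probed in this log."""
    return f"{service}.{namespace}.svc.{cluster_domain}"
```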

Sep 28 19:54:56.013: FAIL: Unexpected error:
    <*errors.errorString | 0xc0002b6240>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
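The "timed out waiting for the condition" error is the generic message produced when a polling wait (the `wait.Poll` pattern used by the e2e framework) exhausts its timeout without the checked condition ever becoming true. A minimal Python sketch of that pattern, assuming nothing beyond the log above (names here are illustrative, not the actual framework API):

```python
import time

def poll_until(condition, interval=2.0, timeout=10.0, clock=time.monotonic):
    """Call `condition` every `interval` seconds until it returns True or
    `timeout` seconds elapse; on timeout, fail with the same generic
    message seen in this log."""
    deadline = clock() + timeout
    while clock() < deadline:
        if condition():
            return True
        time.sleep(interval)
    raise TimeoutError("timed out waiting for the condition")
```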

... skipping 25 lines ...
Sep 28 19:54:56.138: INFO: At 2021-09-28 19:42:04 +0000 UTC - event for dns-test-814e4d3f-3339-4e94-913c-0aff3ac28dc1: {kubelet ip-172-20-62-211.ec2.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
Sep 28 19:54:56.138: INFO: At 2021-09-28 19:42:04 +0000 UTC - event for dns-test-814e4d3f-3339-4e94-913c-0aff3ac28dc1: {kubelet ip-172-20-62-211.ec2.internal} Created: Created container querier
Sep 28 19:54:56.138: INFO: At 2021-09-28 19:42:04 +0000 UTC - event for dns-test-814e4d3f-3339-4e94-913c-0aff3ac28dc1: {kubelet ip-172-20-62-211.ec2.internal} Started: Started container querier
Sep 28 19:54:56.138: INFO: At 2021-09-28 19:42:04 +0000 UTC - event for dns-test-814e4d3f-3339-4e94-913c-0aff3ac28dc1: {kubelet ip-172-20-62-211.ec2.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4" already present on machine
Sep 28 19:54:56.138: INFO: At 2021-09-28 19:42:04 +0000 UTC - event for dns-test-814e4d3f-3339-4e94-913c-0aff3ac28dc1: {kubelet ip-172-20-62-211.ec2.internal} Created: Created container jessie-querier
Sep 28 19:54:56.138: INFO: At 2021-09-28 19:42:04 +0000 UTC - event for dns-test-814e4d3f-3339-4e94-913c-0aff3ac28dc1: {kubelet ip-172-20-62-211.ec2.internal} Started: Started container jessie-querier
Sep 28 19:54:56.138: INFO: At 2021-09-28 19:43:06 +0000 UTC - event for dns-test-814e4d3f-3339-4e94-913c-0aff3ac28dc1: {kubelet ip-172-20-62-211.ec2.internal} BackOff: Back-off restarting failed container
Sep 28 19:54:56.138: INFO: At 2021-09-28 19:43:06 +0000 UTC - event for dns-test-814e4d3f-3339-4e94-913c-0aff3ac28dc1: {kubelet ip-172-20-62-211.ec2.internal} BackOff: Back-off restarting failed container
Sep 28 19:54:56.175: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Sep 28 19:54:56.176: INFO: 
Sep 28 19:54:56.214: INFO: 
Logging node info for node ip-172-20-36-158.ec2.internal
Sep 28 19:54:56.252: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-36-158.ec2.internal    93c10778-457f-4d8a-9f72-2e1a50bb9359 45484 0 2021-09-28 19:20:24 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:us-east-1 failure-domain.beta.kubernetes.io/zone:us-east-1a kops.k8s.io/instancegroup:nodes-us-east-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-36-158.ec2.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.hostpath.csi/node:ip-172-20-36-158.ec2.internal topology.kubernetes.io/region:us-east-1 topology.kubernetes.io/zone:us-east-1a] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kops-controller Update v1 2021-09-28 19:20:24 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/instancegroup":{},"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}}} {kubelet Update v1 2021-09-28 19:45:45 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.hostpath.csi/node":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}}} {kube-controller-manager Update v1 2021-09-28 19:45:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.1.0/24\"":{}}}}}]},Spec:NodeSpec{PodCIDR:100.96.1.0/24,DoNotUseExternalID:,ProviderID:aws:///us-east-1a/i-03f17841d09a5163a,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-aws-ebs: {{25 0} {<nil>} 25 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{47455764480 0} {<nil>}  BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4061724672 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-aws-ebs: {{25 0} {<nil>} 25 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42710187962 0} {<nil>} 42710187962 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 
DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3956867072 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-09-28 19:51:00 +0000 UTC,LastTransitionTime:2021-09-28 19:20:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-09-28 19:51:00 +0000 UTC,LastTransitionTime:2021-09-28 19:20:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-09-28 19:51:00 +0000 UTC,LastTransitionTime:2021-09-28 19:20:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-09-28 19:51:00 +0000 UTC,LastTransitionTime:2021-09-28 19:20:34 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.36.158,},NodeAddress{Type:ExternalIP,Address:52.91.99.224,},NodeAddress{Type:Hostname,Address:ip-172-20-36-158.ec2.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-36-158.ec2.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-52-91-99-224.compute-1.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec211721c19a6522a00296d52b725cbb,SystemUUID:ec211721-c19a-6522-a002-96d52b725cbb,BootID:0f938b1e-a6e4-4703-823e-56774e82ed59,KernelVersion:5.10.67-flatcar,OSImage:Flatcar Container Linux by Kinvolk 2905.2.4 (Oklo),ContainerRuntimeVersion:containerd://1.5.4,KubeletVersion:v1.21.5,KubeProxyVersion:v1.21.5,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 
k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.21.5],SizeBytes:105352393,},ContainerImage{Names:[docker.io/library/nginx@sha256:969419c0b7b0a5f40a4d666ad227360de5874930a2b228a7c11e15dedbc6e092 docker.io/library/nginx:latest],SizeBytes:53799606,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:50002177,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[docker.io/kopeio/networking-agent@sha256:2d16bdbc3257c42cdc59b05b8fad86653033f19cfafa709f263e93c8f7002932 docker.io/kopeio/networking-agent:1.0.20181028],SizeBytes:25781346,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:d226f3fd5f293ff513f53573a40c069b89d57d42338a1045b493bf702ac6b1f6 k8s.gcr.io/build-image/debian-iptables:buster-v1.6.5],SizeBytes:23799574,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:695505fcfcc69f1cf35665dce487aad447adbb9af69b796d6437f869015d1157 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.1],SizeBytes:21212251,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 
k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:51f2dfde5bccac7854b3704689506aeecfb793328427b91115ba253a93e60782 k8s.gcr.io/sig-storage/csi-snapshotter:v4.0.0],SizeBytes:20194320,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:50c3cfd458fc8e0bf3c8c521eac39172009382fc66dc5044a330d137c6ed0b09 k8s.gcr.io/sig-storage/csi-attacher:v3.1.0],SizeBytes:20103959,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a k8s.gcr.io/sig-storage/csi-resizer:v1.1.0],SizeBytes:20096832,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def k8s.gcr.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:d2b357bb02430fee9eaa43b16083981463d260419fe3acb2f560ede5c129f6f5 k8s.gcr.io/sig-storage/hostpathplugin:v1.4.0],SizeBytes:13995876,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:6e5a02c21641597998b4be7cb5eb1e7b02c0d8d23cce4dd09f4682d463798890 k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:e07f914c32f0505e4c470a62a40ee43f84cbf8dc46ff861f31b14457ccbad108 
k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.0.1],SizeBytes:8415088,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:48da0e4ed7238ad461ea05f68c25921783c37b315f21a5c5a2780157a6460994 k8s.gcr.io/sig-storage/livenessprobe:v2.2.0],SizeBytes:8279778,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause:3.5],SizeBytes:301416,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-provisioning-1597^04e19cc8-2093-11ec-8233-426f905fbbb7 kubernetes.io/csi/csi-hostpath-provisioning-8863^2a5d5eac-2092-11ec-bf30-22403311e93d],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Sep 28 19:54:56.252: INFO: 
... skipping 99 lines ...
• Failure [774.586 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide DNS for ExternalName services [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Sep 28 19:54:56.013: Unexpected error:
      <*errors.errorString | 0xc0002b6240>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:463
------------------------------
{"msg":"FAILED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":-1,"completed":18,"skipped":162,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-network] DNS should provide DNS for services  [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]"]}
Sep 28 19:54:57.988: INFO: Running AfterSuite actions on all nodes
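Each spec's outcome is emitted as a one-line JSON record like the `{"msg":"FAILED ..."}` entries above, with a `failures` array naming every failed spec so far on that runner. A hedged sketch for tallying failures from such lines (the function is illustrative tooling, not part of this job):

```python
import json

def summarize_failures(lines):
    """Collect the distinct failed-spec names from ginkgo-style one-line
    JSON result records, skipping ordinary (non-JSON) log lines."""
    failures = set()
    for line in lines:
        line = line.strip()
        if not line.startswith("{"):
            continue  # plain log line, not a result record
        try:
            record = json.loads(line)
        except json.JSONDecodeError:
            continue  # malformed/truncated record
        failures.update(record.get("failures", []))
    return sorted(failures)
```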


[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 7 lines ...
STEP: creating RC slow-terminating-unready-pod with selectors map[name:slow-terminating-unready-pod]
STEP: creating Service tolerate-unready with selectors map[name:slow-terminating-unready-pod testid:tolerate-unready-fad887d0-80d1-4fa9-ac2c-27888d21963c]
STEP: Verifying pods for RC slow-terminating-unready-pod
Sep 28 19:39:08.205: INFO: Pod name slow-terminating-unready-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
STEP: trying to dial each unique pod
Sep 28 19:39:40.384: INFO: Controller slow-terminating-unready-pod: Failed to GET from replica 1 [slow-terminating-unready-pod-6c4wd]: the server is currently unable to handle the request (get pods slow-terminating-unready-pod-6c4wd)
pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63768454748, loc:(*time.Location)(0x9e12f00)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63768454748, loc:(*time.Location)(0x9e12f00)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [slow-terminating-unready-pod]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63768454748, loc:(*time.Location)(0x9e12f00)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [slow-terminating-unready-pod]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63768454748, loc:(*time.Location)(0x9e12f00)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.20.62.211", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(0xc00361ed98), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"slow-terminating-unready-pod", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc003526000), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/agnhost:2.32", ImageID:"", 
ContainerID:"", Started:(*bool)(0xc0031fe23d)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
... skipping 9 lines ...
Sep 28 19:41:48.492: INFO: Controller slow-terminating-unready-pod: Failed to GET from replica 1 [slow-terminating-unready-pod-6c4wd]: the server is currently unable to handle the request (get pods slow-terminating-unready-pod-6c4wd)
pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63768454748, loc:(*time.Location)(0x9e12f00)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63768454748, loc:(*time.Location)(0x9e12f00)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [slow-terminating-unready-pod]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63768454748, loc:(*time.Location)(0x9e12f00)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [slow-terminating-unready-pod]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63768454748, loc:(*time.Location)(0x9e12f00)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.20.62.211", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(0xc00361ed98), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"slow-terminating-unready-pod", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc003526000), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/agnhost:2.32", ImageID:"", 
ContainerID:"", Started:(*bool)(0xc0031fe23d)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
Sep 28 19:42:20.491: INFO: Controller slow-terminating-unready-pod: Failed to GET from replica 1 [slow-terminating-unready-pod-6c4wd]: the server is currently unable to handle the request (get pods slow-terminating-unready-pod-6c4wd)
pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63768454748, loc:(*time.Location)(0x9e12f00)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63768454748, loc:(*time.Location)(0x9e12f00)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [slow-terminating-unready-pod]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63768454748, loc:(*time.Location)(0x9e12f00)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [slow-terminating-unready-pod]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63768454748, loc:(*time.Location)(0x9e12f00)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.20.62.211", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(0xc00361ed98), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"slow-terminating-unready-pod", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc003526000), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/agnhost:2.32", ImageID:"", 
ContainerID:"", Started:(*bool)(0xc0031fe23d)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
Sep 28 19:42:52.494: INFO: Controller slow-terminating-unready-pod: Failed to GET from replica 1 [slow-terminating-unready-pod-6c4wd]: the server is currently unable to handle the request (get pods slow-terminating-unready-pod-6c4wd)
pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63768454748, loc:(*time.Location)(0x9e12f00)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63768454748, loc:(*time.Location)(0x9e12f00)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [slow-terminating-unready-pod]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63768454748, loc:(*time.Location)(0x9e12f00)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [slow-terminating-unready-pod]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63768454748, loc:(*time.Location)(0x9e12f00)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.20.62.211", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(0xc00361ed98), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"slow-terminating-unready-pod", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc003526000), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/agnhost:2.32", ImageID:"", 
ContainerID:"", Started:(*bool)(0xc0031fe23d)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
Sep 28 19:43:24.494: INFO: Controller slow-terminating-unready-pod: Failed to GET from replica 1 [slow-terminating-unready-pod-6c4wd]: the server is currently unable to handle the request (get pods slow-terminating-unready-pod-6c4wd)
pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63768454748, loc:(*time.Location)(0x9e12f00)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63768454748, loc:(*time.Location)(0x9e12f00)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [slow-terminating-unready-pod]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63768454748, loc:(*time.Location)(0x9e12f00)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [slow-terminating-unready-pod]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63768454748, loc:(*time.Location)(0x9e12f00)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.20.62.211", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(0xc00361ed98), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"slow-terminating-unready-pod", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc003526000), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/agnhost:2.32", ImageID:"", 
ContainerID:"", Started:(*bool)(0xc0031fe23d)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
Sep 28 19:43:56.514: INFO: Controller slow-terminating-unready-pod: Failed to GET from replica 1 [slow-terminating-unready-pod-6c4wd]: the server is currently unable to handle the request (get pods slow-terminating-unready-pod-6c4wd)
pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63768454748, loc:(*time.Location)(0x9e12f00)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63768454748, loc:(*time.Location)(0x9e12f00)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [slow-terminating-unready-pod]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63768454748, loc:(*time.Location)(0x9e12f00)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [slow-terminating-unready-pod]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63768454748, loc:(*time.Location)(0x9e12f00)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.20.62.211", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(0xc00361ed98), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"slow-terminating-unready-pod", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc003526000), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/agnhost:2.32", ImageID:"", 
ContainerID:"", Started:(*bool)(0xc0031fe23d)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
Sep 28 19:44:28.493: INFO: Controller slow-terminating-unready-pod: Failed to GET from replica 1 [slow-terminating-unready-pod-6c4wd]: the server is currently unable to handle the request (get pods slow-terminating-unready-pod-6c4wd)
pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63768454748, loc:(*time.Location)(0x9e12f00)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63768454748, loc:(*time.Location)(0x9e12f00)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [slow-terminating-unready-pod]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63768454748, loc:(*time.Location)(0x9e12f00)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [slow-terminating-unready-pod]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63768454748, loc:(*time.Location)(0x9e12f00)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.20.62.211", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(0xc00361ed98), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"slow-terminating-unready-pod", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc003526000), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/agnhost:2.32", ImageID:"", 
ContainerID:"", Started:(*bool)(0xc0031fe23d)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
Sep 28 19:45:00.491: INFO: Controller slow-terminating-unready-pod: Failed to GET from replica 1 [slow-terminating-unready-pod-6c4wd]: the server is currently unable to handle the request (get pods slow-terminating-unready-pod-6c4wd)
pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63768454748, loc:(*time.Location)(0x9e12f00)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63768454748, loc:(*time.Location)(0x9e12f00)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [slow-terminating-unready-pod]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63768454748, loc:(*time.Location)(0x9e12f00)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [slow-terminating-unready-pod]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63768454748, loc:(*time.Location)(0x9e12f00)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.20.62.211", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(0xc00361ed98), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"slow-terminating-unready-pod", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc003526000), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/agnhost:2.32", ImageID:"", 
ContainerID:"", Started:(*bool)(0xc0031fe23d)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
Sep 28 19:45:32.499: INFO: Controller slow-terminating-unready-pod: Failed to GET from replica 1 [slow-terminating-unready-pod-6c4wd]: the server is currently unable to handle the request (get pods slow-terminating-unready-pod-6c4wd)
pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63768454748, loc:(*time.Location)(0x9e12f00)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63768454748, loc:(*time.Location)(0x9e12f00)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [slow-terminating-unready-pod]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63768454748, loc:(*time.Location)(0x9e12f00)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [slow-terminating-unready-pod]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63768454748, loc:(*time.Location)(0x9e12f00)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.20.62.211", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(0xc00361ed98), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"slow-terminating-unready-pod", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc003526000), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/agnhost:2.32", ImageID:"", 
ContainerID:"", Started:(*bool)(0xc0031fe23d)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
Sep 28 19:46:04.493: INFO: Controller slow-terminating-unready-pod: Failed to GET from replica 1 [slow-terminating-unready-pod-6c4wd]: the server is currently unable to handle the request (get pods slow-terminating-unready-pod-6c4wd)
pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63768454748, loc:(*time.Location)(0x9e12f00)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63768454748, loc:(*time.Location)(0x9e12f00)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [slow-terminating-unready-pod]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63768454748, loc:(*time.Location)(0x9e12f00)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [slow-terminating-unready-pod]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63768454748, loc:(*time.Location)(0x9e12f00)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.20.62.211", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(0xc00361ed98), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"slow-terminating-unready-pod", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc003526000), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/agnhost:2.32", ImageID:"", 
ContainerID:"", Started:(*bool)(0xc0031fe23d)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
Sep 28 19:46:36.493: INFO: Controller slow-terminating-unready-pod: Failed to GET from replica 1 [slow-terminating-unready-pod-6c4wd]: the server is currently unable to handle the request (get pods slow-terminating-unready-pod-6c4wd)
pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63768454748, loc:(*time.Location)(0x9e12f00)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63768454748, loc:(*time.Location)(0x9e12f00)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [slow-terminating-unready-pod]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63768454748, loc:(*time.Location)(0x9e12f00)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [slow-terminating-unready-pod]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63768454748, loc:(*time.Location)(0x9e12f00)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.20.62.211", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(0xc00361ed98), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"slow-terminating-unready-pod", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc003526000), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/agnhost:2.32", ImageID:"", 
ContainerID:"", Started:(*bool)(0xc0031fe23d)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
Sep 28 19:47:08.493: INFO: Controller slow-terminating-unready-pod: Failed to GET from replica 1 [slow-terminating-unready-pod-6c4wd]: the server is currently unable to handle the request (get pods slow-terminating-unready-pod-6c4wd)
pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63768454748, loc:(*time.Location)(0x9e12f00)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63768454748, loc:(*time.Location)(0x9e12f00)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [slow-terminating-unready-pod]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63768454748, loc:(*time.Location)(0x9e12f00)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [slow-terminating-unready-pod]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63768454748, loc:(*time.Location)(0x9e12f00)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.20.62.211", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(0xc00361ed98), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"slow-terminating-unready-pod", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc003526000), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/agnhost:2.32", ImageID:"", 
ContainerID:"", Started:(*bool)(0xc0031fe23d)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
Sep 28 19:47:40.492: INFO: Controller slow-terminating-unready-pod: Failed to GET from replica 1 [slow-terminating-unready-pod-6c4wd]: the server is currently unable to handle the request (get pods slow-terminating-unready-pod-6c4wd)
pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63768454748, loc:(*time.Location)(0x9e12f00)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63768454748, loc:(*time.Location)(0x9e12f00)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [slow-terminating-unready-pod]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63768454748, loc:(*time.Location)(0x9e12f00)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [slow-terminating-unready-pod]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63768454748, loc:(*time.Location)(0x9e12f00)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.20.62.211", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(0xc00361ed98), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"slow-terminating-unready-pod", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc003526000), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/agnhost:2.32", ImageID:"", 
ContainerID:"", Started:(*bool)(0xc0031fe23d)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
Sep 28 19:48:12.490: INFO: Controller slow-terminating-unready-pod: Failed to GET from replica 1 [slow-terminating-unready-pod-6c4wd]: the server is currently unable to handle the request (get pods slow-terminating-unready-pod-6c4wd)
pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63768454748, loc:(*time.Location)(0x9e12f00)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63768454748, loc:(*time.Location)(0x9e12f00)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [slow-terminating-unready-pod]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63768454748, loc:(*time.Location)(0x9e12f00)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [slow-terminating-unready-pod]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63768454748, loc:(*time.Location)(0x9e12f00)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.20.62.211", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(0xc00361ed98), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"slow-terminating-unready-pod", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc003526000), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/agnhost:2.32", ImageID:"", 
ContainerID:"", Started:(*bool)(0xc0031fe23d)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
Sep 28 19:48:44.490: INFO: Controller slow-terminating-unready-pod: Failed to GET from replica 1 [slow-terminating-unready-pod-6c4wd]: the server is currently unable to handle the request (get pods slow-terminating-unready-pod-6c4wd)
pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63768454748, loc:(*time.Location)(0x9e12f00)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63768454748, loc:(*time.Location)(0x9e12f00)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [slow-terminating-unready-pod]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63768454748, loc:(*time.Location)(0x9e12f00)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [slow-terminating-unready-pod]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63768454748, loc:(*time.Location)(0x9e12f00)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.20.62.211", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(0xc00361ed98), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"slow-terminating-unready-pod", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc003526000), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/agnhost:2.32", ImageID:"", 
ContainerID:"", Started:(*bool)(0xc0031fe23d)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
Sep 28 19:49:16.492: INFO: Controller slow-terminating-unready-pod: Failed to GET from replica 1 [slow-terminating-unready-pod-6c4wd]: the server is currently unable to handle the request (get pods slow-terminating-unready-pod-6c4wd)
Sep 28 19:49:48.491: INFO: Controller slow-terminating-unready-pod: Failed to GET from replica 1 [slow-terminating-unready-pod-6c4wd]: the server is currently unable to handle the request (get pods slow-terminating-unready-pod-6c4wd)
Sep 28 19:50:20.491: INFO: Controller slow-terminating-unready-pod: Failed to GET from replica 1 [slow-terminating-unready-pod-6c4wd]: the server is currently unable to handle the request (get pods slow-terminating-unready-pod-6c4wd)
Sep 28 19:50:52.492: INFO: Controller slow-terminating-unready-pod: Failed to GET from replica 1 [slow-terminating-unready-pod-6c4wd]: the server is currently unable to handle the request (get pods slow-terminating-unready-pod-6c4wd)
Sep 28 19:51:24.492: INFO: Controller slow-terminating-unready-pod: Failed to GET from replica 1 [slow-terminating-unready-pod-6c4wd]: the server is currently unable to handle the request (get pods slow-terminating-unready-pod-6c4wd)
Sep 28 19:51:56.492: INFO: Controller slow-terminating-unready-pod: Failed to GET from replica 1 [slow-terminating-unready-pod-6c4wd]: the server is currently unable to handle the request (get pods slow-terminating-unready-pod-6c4wd)
Sep 28 19:52:28.492: INFO: Controller slow-terminating-unready-pod: Failed to GET from replica 1 [slow-terminating-unready-pod-6c4wd]: the server is currently unable to handle the request (get pods slow-terminating-unready-pod-6c4wd)
Sep 28 19:53:00.492: INFO: Controller slow-terminating-unready-pod: Failed to GET from replica 1 [slow-terminating-unready-pod-6c4wd]: the server is currently unable to handle the request (get pods slow-terminating-unready-pod-6c4wd)
Sep 28 19:53:32.493: INFO: Controller slow-terminating-unready-pod: Failed to GET from replica 1 [slow-terminating-unready-pod-6c4wd]: the server is currently unable to handle the request (get pods slow-terminating-unready-pod-6c4wd)
Sep 28 19:54:04.492: INFO: Controller slow-terminating-unready-pod: Failed to GET from replica 1 [slow-terminating-unready-pod-6c4wd]: the server is currently unable to handle the request (get pods slow-terminating-unready-pod-6c4wd)
Sep 28 19:54:36.491: INFO: Controller slow-terminating-unready-pod: Failed to GET from replica 1 [slow-terminating-unready-pod-6c4wd]: the server is currently unable to handle the request (get pods slow-terminating-unready-pod-6c4wd)
Sep 28 19:55:08.494: INFO: Controller slow-terminating-unready-pod: Failed to GET from replica 1 [slow-terminating-unready-pod-6c4wd]: the server is currently unable to handle the request (get pods slow-terminating-unready-pod-6c4wd)
Sep 28 19:55:38.602: INFO: Controller slow-terminating-unready-pod: Failed to GET from replica 1 [slow-terminating-unready-pod-6c4wd]: the server is currently unable to handle the request (get pods slow-terminating-unready-pod-6c4wd)
Sep 28 19:55:38.603: FAIL: Unexpected error:
    <*errors.errorString | 0xc003ac02d0>: {
        s: "failed to wait for pods responding: timed out waiting for the condition",
    }
    failed to wait for pods responding: timed out waiting for the condition
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func24.21()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1688 +0xb99
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc003c68180)
... skipping 12 lines ...
STEP: Found 7 events.
Sep 28 19:55:38.793: INFO: At 2021-09-28 19:39:08 +0000 UTC - event for slow-terminating-unready-pod: {replication-controller } SuccessfulCreate: Created pod: slow-terminating-unready-pod-6c4wd
Sep 28 19:55:38.793: INFO: At 2021-09-28 19:39:08 +0000 UTC - event for slow-terminating-unready-pod-6c4wd: {default-scheduler } Scheduled: Successfully assigned services-3991/slow-terminating-unready-pod-6c4wd to ip-172-20-62-211.ec2.internal
Sep 28 19:55:38.793: INFO: At 2021-09-28 19:39:08 +0000 UTC - event for slow-terminating-unready-pod-6c4wd: {kubelet ip-172-20-62-211.ec2.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
Sep 28 19:55:38.793: INFO: At 2021-09-28 19:39:08 +0000 UTC - event for slow-terminating-unready-pod-6c4wd: {kubelet ip-172-20-62-211.ec2.internal} Created: Created container slow-terminating-unready-pod
Sep 28 19:55:38.793: INFO: At 2021-09-28 19:39:08 +0000 UTC - event for slow-terminating-unready-pod-6c4wd: {kubelet ip-172-20-62-211.ec2.internal} Started: Started container slow-terminating-unready-pod
Sep 28 19:55:38.793: INFO: At 2021-09-28 19:39:08 +0000 UTC - event for slow-terminating-unready-pod-6c4wd: {kubelet ip-172-20-62-211.ec2.internal} Unhealthy: Readiness probe failed: 
Sep 28 19:55:38.793: INFO: At 2021-09-28 19:55:38 +0000 UTC - event for slow-terminating-unready-pod: {replication-controller } SuccessfulDelete: Deleted pod: slow-terminating-unready-pod-6c4wd
Sep 28 19:55:38.828: INFO: POD                                 NODE                           PHASE    GRACE  CONDITIONS
Sep 28 19:55:38.828: INFO: slow-terminating-unready-pod-6c4wd  ip-172-20-62-211.ec2.internal  Running  600s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-09-28 19:39:08 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-09-28 19:39:08 +0000 UTC ContainersNotReady containers with unready status: [slow-terminating-unready-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-09-28 19:39:08 +0000 UTC ContainersNotReady containers with unready status: [slow-terminating-unready-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-09-28 19:39:08 +0000 UTC  }]
Sep 28 19:55:38.828: INFO: 
Sep 28 19:55:38.864: INFO: 
Logging node info for node ip-172-20-36-158.ec2.internal
... skipping 103 lines ...
• Failure [992.559 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should create endpoints for unready pods [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1624

  Sep 28 19:55:38.603: Unexpected error:
      <*errors.errorString | 0xc003ac02d0>: {
          s: "failed to wait for pods responding: timed out waiting for the condition",
      }
      failed to wait for pods responding: timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1688
------------------------------
{"msg":"FAILED [sig-network] Services should create endpoints for unready pods","total":-1,"completed":25,"skipped":118,"failed":1,"failures":["[sig-network] Services should create endpoints for unready pods"]}
Sep 28 19:55:40.481: INFO: Running AfterSuite actions on all nodes


{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents","total":-1,"completed":11,"skipped":83,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 28 19:37:16.797: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 12 lines ...
Sep 28 19:38:55.264: INFO: Unable to read wheezy_udp@PodARecord from pod dns-2050/dns-test-40f33891-a2db-4bf0-9a1c-247d31409c2d: the server is currently unable to handle the request (get pods dns-test-40f33891-a2db-4bf0-9a1c-247d31409c2d)
Sep 28 19:39:25.303: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-2050/dns-test-40f33891-a2db-4bf0-9a1c-247d31409c2d: the server is currently unable to handle the request (get pods dns-test-40f33891-a2db-4bf0-9a1c-247d31409c2d)
Sep 28 19:39:55.342: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod dns-2050/dns-test-40f33891-a2db-4bf0-9a1c-247d31409c2d: the server is currently unable to handle the request (get pods dns-test-40f33891-a2db-4bf0-9a1c-247d31409c2d)
Sep 28 19:40:25.380: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod dns-2050/dns-test-40f33891-a2db-4bf0-9a1c-247d31409c2d: the server is currently unable to handle the request (get pods dns-test-40f33891-a2db-4bf0-9a1c-247d31409c2d)
Sep 28 19:40:55.417: INFO: Unable to read jessie_udp@PodARecord from pod dns-2050/dns-test-40f33891-a2db-4bf0-9a1c-247d31409c2d: the server is currently unable to handle the request (get pods dns-test-40f33891-a2db-4bf0-9a1c-247d31409c2d)
Sep 28 19:41:25.456: INFO: Unable to read jessie_tcp@PodARecord from pod dns-2050/dns-test-40f33891-a2db-4bf0-9a1c-247d31409c2d: the server is currently unable to handle the request (get pods dns-test-40f33891-a2db-4bf0-9a1c-247d31409c2d)
Sep 28 19:41:25.456: INFO: Lookups using dns-2050/dns-test-40f33891-a2db-4bf0-9a1c-247d31409c2d failed for: [wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord]

Sep 28 19:42:00.495: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod dns-2050/dns-test-40f33891-a2db-4bf0-9a1c-247d31409c2d: the server is currently unable to handle the request (get pods dns-test-40f33891-a2db-4bf0-9a1c-247d31409c2d)
Sep 28 19:42:30.533: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-2050/dns-test-40f33891-a2db-4bf0-9a1c-247d31409c2d: the server is currently unable to handle the request (get pods dns-test-40f33891-a2db-4bf0-9a1c-247d31409c2d)
Sep 28 19:43:00.571: INFO: Unable to read wheezy_udp@PodARecord from pod dns-2050/dns-test-40f33891-a2db-4bf0-9a1c-247d31409c2d: the server is currently unable to handle the request (get pods dns-test-40f33891-a2db-4bf0-9a1c-247d31409c2d)
Sep 28 19:43:30.609: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-2050/dns-test-40f33891-a2db-4bf0-9a1c-247d31409c2d: the server is currently unable to handle the request (get pods dns-test-40f33891-a2db-4bf0-9a1c-247d31409c2d)
Sep 28 19:44:00.653: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod dns-2050/dns-test-40f33891-a2db-4bf0-9a1c-247d31409c2d: the server is currently unable to handle the request (get pods dns-test-40f33891-a2db-4bf0-9a1c-247d31409c2d)
Sep 28 19:44:30.691: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod dns-2050/dns-test-40f33891-a2db-4bf0-9a1c-247d31409c2d: the server is currently unable to handle the request (get pods dns-test-40f33891-a2db-4bf0-9a1c-247d31409c2d)
Sep 28 19:45:00.729: INFO: Unable to read jessie_udp@PodARecord from pod dns-2050/dns-test-40f33891-a2db-4bf0-9a1c-247d31409c2d: the server is currently unable to handle the request (get pods dns-test-40f33891-a2db-4bf0-9a1c-247d31409c2d)
Sep 28 19:45:30.767: INFO: Unable to read jessie_tcp@PodARecord from pod dns-2050/dns-test-40f33891-a2db-4bf0-9a1c-247d31409c2d: the server is currently unable to handle the request (get pods dns-test-40f33891-a2db-4bf0-9a1c-247d31409c2d)
Sep 28 19:45:30.767: INFO: Lookups using dns-2050/dns-test-40f33891-a2db-4bf0-9a1c-247d31409c2d failed for: [wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord]

Sep 28 19:46:05.505: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod dns-2050/dns-test-40f33891-a2db-4bf0-9a1c-247d31409c2d: the server is currently unable to handle the request (get pods dns-test-40f33891-a2db-4bf0-9a1c-247d31409c2d)
Sep 28 19:46:35.543: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-2050/dns-test-40f33891-a2db-4bf0-9a1c-247d31409c2d: the server is currently unable to handle the request (get pods dns-test-40f33891-a2db-4bf0-9a1c-247d31409c2d)
Sep 28 19:47:05.580: INFO: Unable to read wheezy_udp@PodARecord from pod dns-2050/dns-test-40f33891-a2db-4bf0-9a1c-247d31409c2d: the server is currently unable to handle the request (get pods dns-test-40f33891-a2db-4bf0-9a1c-247d31409c2d)
Sep 28 19:47:35.619: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-2050/dns-test-40f33891-a2db-4bf0-9a1c-247d31409c2d: the server is currently unable to handle the request (get pods dns-test-40f33891-a2db-4bf0-9a1c-247d31409c2d)
Sep 28 19:48:05.657: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod dns-2050/dns-test-40f33891-a2db-4bf0-9a1c-247d31409c2d: the server is currently unable to handle the request (get pods dns-test-40f33891-a2db-4bf0-9a1c-247d31409c2d)
Sep 28 19:48:35.695: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod dns-2050/dns-test-40f33891-a2db-4bf0-9a1c-247d31409c2d: the server is currently unable to handle the request (get pods dns-test-40f33891-a2db-4bf0-9a1c-247d31409c2d)
Sep 28 19:49:05.734: INFO: Unable to read jessie_udp@PodARecord from pod dns-2050/dns-test-40f33891-a2db-4bf0-9a1c-247d31409c2d: the server is currently unable to handle the request (get pods dns-test-40f33891-a2db-4bf0-9a1c-247d31409c2d)
Sep 28 19:49:35.772: INFO: Unable to read jessie_tcp@PodARecord from pod dns-2050/dns-test-40f33891-a2db-4bf0-9a1c-247d31409c2d: the server is currently unable to handle the request (get pods dns-test-40f33891-a2db-4bf0-9a1c-247d31409c2d)
Sep 28 19:49:35.772: INFO: Lookups using dns-2050/dns-test-40f33891-a2db-4bf0-9a1c-247d31409c2d failed for: [wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord]

Sep 28 19:50:10.495: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod dns-2050/dns-test-40f33891-a2db-4bf0-9a1c-247d31409c2d: the server is currently unable to handle the request (get pods dns-test-40f33891-a2db-4bf0-9a1c-247d31409c2d)
Sep 28 19:50:40.532: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-2050/dns-test-40f33891-a2db-4bf0-9a1c-247d31409c2d: the server is currently unable to handle the request (get pods dns-test-40f33891-a2db-4bf0-9a1c-247d31409c2d)
Sep 28 19:51:10.571: INFO: Unable to read wheezy_udp@PodARecord from pod dns-2050/dns-test-40f33891-a2db-4bf0-9a1c-247d31409c2d: the server is currently unable to handle the request (get pods dns-test-40f33891-a2db-4bf0-9a1c-247d31409c2d)
Sep 28 19:51:40.610: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-2050/dns-test-40f33891-a2db-4bf0-9a1c-247d31409c2d: the server is currently unable to handle the request (get pods dns-test-40f33891-a2db-4bf0-9a1c-247d31409c2d)
Sep 28 19:52:10.648: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod dns-2050/dns-test-40f33891-a2db-4bf0-9a1c-247d31409c2d: the server is currently unable to handle the request (get pods dns-test-40f33891-a2db-4bf0-9a1c-247d31409c2d)
Sep 28 19:52:40.687: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod dns-2050/dns-test-40f33891-a2db-4bf0-9a1c-247d31409c2d: the server is currently unable to handle the request (get pods dns-test-40f33891-a2db-4bf0-9a1c-247d31409c2d)
Sep 28 19:53:10.725: INFO: Unable to read jessie_udp@PodARecord from pod dns-2050/dns-test-40f33891-a2db-4bf0-9a1c-247d31409c2d: the server is currently unable to handle the request (get pods dns-test-40f33891-a2db-4bf0-9a1c-247d31409c2d)
Sep 28 19:53:40.764: INFO: Unable to read jessie_tcp@PodARecord from pod dns-2050/dns-test-40f33891-a2db-4bf0-9a1c-247d31409c2d: the server is currently unable to handle the request (get pods dns-test-40f33891-a2db-4bf0-9a1c-247d31409c2d)
Sep 28 19:53:40.764: INFO: Lookups using dns-2050/dns-test-40f33891-a2db-4bf0-9a1c-247d31409c2d failed for: [wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord]

Sep 28 19:54:10.803: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod dns-2050/dns-test-40f33891-a2db-4bf0-9a1c-247d31409c2d: the server is currently unable to handle the request (get pods dns-test-40f33891-a2db-4bf0-9a1c-247d31409c2d)
Sep 28 19:54:40.842: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-2050/dns-test-40f33891-a2db-4bf0-9a1c-247d31409c2d: the server is currently unable to handle the request (get pods dns-test-40f33891-a2db-4bf0-9a1c-247d31409c2d)
Sep 28 19:55:10.880: INFO: Unable to read wheezy_udp@PodARecord from pod dns-2050/dns-test-40f33891-a2db-4bf0-9a1c-247d31409c2d: the server is currently unable to handle the request (get pods dns-test-40f33891-a2db-4bf0-9a1c-247d31409c2d)
Sep 28 19:55:40.918: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-2050/dns-test-40f33891-a2db-4bf0-9a1c-247d31409c2d: the server is currently unable to handle the request (get pods dns-test-40f33891-a2db-4bf0-9a1c-247d31409c2d)
Sep 28 19:56:10.961: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod dns-2050/dns-test-40f33891-a2db-4bf0-9a1c-247d31409c2d: the server is currently unable to handle the request (get pods dns-test-40f33891-a2db-4bf0-9a1c-247d31409c2d)
Sep 28 19:56:41.000: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod dns-2050/dns-test-40f33891-a2db-4bf0-9a1c-247d31409c2d: the server is currently unable to handle the request (get pods dns-test-40f33891-a2db-4bf0-9a1c-247d31409c2d)
Sep 28 19:57:11.038: INFO: Unable to read jessie_udp@PodARecord from pod dns-2050/dns-test-40f33891-a2db-4bf0-9a1c-247d31409c2d: the server is currently unable to handle the request (get pods dns-test-40f33891-a2db-4bf0-9a1c-247d31409c2d)
Sep 28 19:57:41.077: INFO: Unable to read jessie_tcp@PodARecord from pod dns-2050/dns-test-40f33891-a2db-4bf0-9a1c-247d31409c2d: the server is currently unable to handle the request (get pods dns-test-40f33891-a2db-4bf0-9a1c-247d31409c2d)
Sep 28 19:57:41.077: INFO: Lookups using dns-2050/dns-test-40f33891-a2db-4bf0-9a1c-247d31409c2d failed for: [wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord]

Sep 28 19:57:41.077: FAIL: Unexpected error:
    <*errors.errorString | 0xc0002b6240>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 136 lines ...
• Failure [1226.161 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide DNS for the cluster  [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Sep 28 19:57:41.077: Unexpected error:
      <*errors.errorString | 0xc0002b6240>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:463
------------------------------
{"msg":"FAILED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":-1,"completed":11,"skipped":83,"failed":2,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]"]}
Sep 28 19:57:42.966: INFO: Running AfterSuite actions on all nodes


[BeforeEach] version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 88 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:74
    A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]","total":-1,"completed":19,"skipped":142,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]"]}
Sep 28 20:09:27.276: INFO: Running AfterSuite actions on all nodes


{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":27,"skipped":223,"failed":2,"failures":["[sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","[sig-network] Services should implement service.kubernetes.io/service-proxy-name"]}
Sep 28 19:46:39.484: INFO: Running AfterSuite actions on all nodes
Sep 28 20:09:27.335: INFO: Running AfterSuite actions on node 1
Sep 28 20:09:27.335: INFO: Dumping logs locally to: /logs/artifacts/5fa0a8b6-2090-11ec-b06e-0a54576c5767
Sep 28 20:09:27.336: INFO: Error running cluster/log-dump/log-dump.sh: fork/exec ../../cluster/log-dump/log-dump.sh: no such file or directory



Summarizing 59 Failures:

[Fail] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] [It] should be able to deny attaching pod [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:961

[Fail] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] [It] listing mutating webhooks should work [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:680

[Fail] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] [It] should mutate custom resource [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1826

[Fail] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1361

[Fail] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] [It] should be able to deny pod and configmap creation [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:909

[Fail] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] [It] should deny crd creation [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:2059

[Fail] [sig-node] PreStop [It] should call prestop when killing a pod  [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:151

[Fail] [sig-network] Services [It] should be able to change the type from NodePort to ExternalName [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1437

[Fail] [sig-network] Networking Granular Checks: Pods [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113

[Fail] [sig-apps] ReplicaSet [It] should serve a basic image on each replica with a public image  [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/replica_set.go:102

[Fail] [sig-network] Services [It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2572

[Fail] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] [It] should be able to deny custom resource creation, update and deletion [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1749

[Fail] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] [It] should mutate pod and apply defaults after mutation [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1055

[Fail] [sig-network] Services [It] should be able to update service type to NodePort listening on same port number but different protocols 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1245

[Fail] [sig-network] Services [It] should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1201

[Fail] [sig-network] DNS [It] should provide DNS for services  [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211

[Fail] [sig-cli] Kubectl client Update Demo [It] should create and stop a replication controller  [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:311

[Fail] [sig-network] Services [It] should have session affinity work for NodePort service [LinuxOnly] [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2572

[Fail] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] [It] patching/updating a validating webhook should work [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:432

[Fail] [sig-network] Services [It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2493

[Fail] [sig-network] Conntrack [It] should drop INVALID conntrack entries 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113

[Fail] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] [It] should be able to convert a non homogeneous list of CRs [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:499

[Fail] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] [It] should honor timeout [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:2188

[Fail] [sig-network] Services [It] should implement service.kubernetes.io/headless 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1941

[Fail] [sig-cli] Kubectl client Guestbook application [It] should create and stop a working application  [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:372

[Fail] [sig-api-machinery] Aggregator [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:406

[Fail] [sig-network] Services [It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2572

[Fail] [sig-network] Services [It] should be able to create a functioning NodePort service [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1169

[Fail] [sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns [It] should create 3 PVs and 3 PVCs: test write access 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:248

[Fail] [sig-network] DNS [It] should resolve DNS of partial qualified names for the cluster [LinuxOnly] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211

[Fail] [sig-network] Services [It] should implement service.kubernetes.io/service-proxy-name 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1889

[Fail] [sig-storage] PersistentVolumes NFS with Single PV - PVC pairs [It] should create a non-pre-bound PV and PVC: test write access  
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:52

[Fail] [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook [It] should execute prestop exec hook properly [NodeConformance] [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:79

[Fail] [sig-cli] Kubectl client Update Demo [It] should scale a replication controller  [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324

[Fail] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] [It] should be able to convert from CR v1 to CR v2 [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:499

[Fail] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] [It] should mutate configmap [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:988

[Fail] [sig-network] Networking Granular Checks: Pods [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113

[Fail] [sig-network] Proxy version v1 [It] should proxy through a service and a pod  [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113

[Fail] [sig-network] Services [It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2572

[Fail] [sig-network] DNS [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211

[Fail] [sig-network] DNS [It] should provide DNS for pods for Subdomain [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211

[Fail] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] [It] listing validating webhooks should work [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:606

[Fail] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] [It] should mutate custom resource with pruning [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1826

[Fail] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] [It] patching/updating a mutating webhook should work [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:527

[Fail] [sig-network] Services [It] should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2493

[Fail] [sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns [It] should create 2 PVs and 4 PVCs: test write access 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:238

[Fail] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] [It] should unconditionally reject operations on fail closed webhook [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1275

[Fail] [sig-network] Networking Granular Checks: Pods [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:93

[Fail] [sig-network] Services [It] should be able to up and down services 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1031

[Fail] [sig-network] DNS [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:463

[Fail] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] [It] should mutate custom resource with different stored version [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1826

[Fail] [sig-storage] PersistentVolumes NFS with Single PV - PVC pairs [It] create a PVC and non-pre-bound PV: test write access 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:52

[Fail] [sig-apps] ReplicationController [It] should serve a basic image on each replica with a public image  [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:65

[Fail] [sig-network] Services [It] should be able to change the type from ExternalName to NodePort [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1351

[Fail] [sig-network] Networking Granular Checks: Pods [It] should function for intra-pod communication: http [NodeConformance] [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:82

[Fail] [sig-network] DNS [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:463

[Fail] [sig-network] DNS [It] should provide DNS for ExternalName services [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:463

[Fail] [sig-network] Services [It] should create endpoints for unready pods 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1688

[Fail] [sig-network] DNS [It] should provide DNS for the cluster  [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:463

Ran 754 of 5770 Specs in 2727.422 seconds
FAIL! -- 695 Passed | 59 Failed | 0 Pending | 5016 Skipped


Ginkgo ran 1 suite in 45m37.333421377s
Test Suite Failed
F0928 20:09:27.374222    4798 tester.go:399] failed to run ginkgo tester: exit status 1
goroutine 1 [running]:
k8s.io/klog/v2.stacks(0x1)
	/home/prow/go/pkg/mod/k8s.io/klog/v2@v2.9.0/klog.go:1026 +0x8a
k8s.io/klog/v2.(*loggingT).output(0x1c35da0, 0x3, {0x0, 0x0}, 0xc000254000, 0x0, {0x1626733, 0xc00022e080}, 0x0, 0x0)
	/home/prow/go/pkg/mod/k8s.io/klog/v2@v2.9.0/klog.go:975 +0x63d
k8s.io/klog/v2.(*loggingT).printf(0xc00006e738, 0x46045b, {0x0, 0x0}, {0x0, 0x0}, {0x11cd4d5, 0x1f}, {0xc00022e080, 0x1, ...})
... skipping 1420 lines ...
route-table:rtb-00458a3e4fe2759d3	ok
vpc:vpc-0634c05758e29a8a8	ok
dhcp-options:dopt-0527835dadf831c3e	ok
Deleted kubectl config for e2e-b08e534318-62691.test-cncf-aws.k8s.io

Deleted cluster: "e2e-b08e534318-62691.test-cncf-aws.k8s.io"
Error: exit status 255
+ EXIT_VALUE=1
+ set +o xtrace