Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2021-10-19 19:14
Elapsed: 56m41s
Revision: master

No Test Failures!


Error lines from build-log.txt

... skipping 132 lines ...
I1019 19:15:31.028631    4732 up.go:43] Cleaning up any leaked resources from previous cluster
I1019 19:15:31.028745    4732 dumplogs.go:40] /logs/artifacts/bce76def-3110-11ec-80d4-3604f5208a0d/kops toolbox dump --name e2e-e05d2a908c-62691.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /etc/aws-ssh/aws-ssh-private --ssh-user core
I1019 19:15:31.044199    4753 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I1019 19:15:31.044330    4753 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true

Cluster.kops.k8s.io "e2e-e05d2a908c-62691.test-cncf-aws.k8s.io" not found
W1019 19:15:31.548253    4732 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I1019 19:15:31.548324    4732 down.go:48] /logs/artifacts/bce76def-3110-11ec-80d4-3604f5208a0d/kops delete cluster --name e2e-e05d2a908c-62691.test-cncf-aws.k8s.io --yes
I1019 19:15:31.563883    4764 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I1019 19:15:31.564119    4764 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true

error reading cluster configuration: Cluster.kops.k8s.io "e2e-e05d2a908c-62691.test-cncf-aws.k8s.io" not found
I1019 19:15:32.079991    4732 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2021/10/19 19:15:32 failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
I1019 19:15:32.090427    4732 http.go:37] curl https://ip.jsb.workers.dev
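The two curl calls above are the harness resolving its own external IP, which feeds the --admin-access flag on the create command below: it asks the GCE metadata server first and, when that 404s (this runner has no GCE external IP), falls back to a public IP-echo service. A minimal shell sketch of that fallback, assuming only that curl is available (the Metadata-Flavor header is the standard GCE metadata-server requirement):

# Prefer the GCE metadata server; fall back to an IP-echo service if it fails.
EXTERNAL_IP=$(curl -sf -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip" \
  || curl -sf https://ip.jsb.workers.dev)
echo "admin access will be granted to ${EXTERNAL_IP}/32"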
I1019 19:15:32.214602    4732 up.go:144] /logs/artifacts/bce76def-3110-11ec-80d4-3604f5208a0d/kops create cluster --name e2e-e05d2a908c-62691.test-cncf-aws.k8s.io --cloud aws --kubernetes-version https://storage.googleapis.com/kubernetes-release/release/v1.21.5 --ssh-public-key /etc/aws-ssh/aws-ssh-public --override cluster.spec.nodePortAccess=0.0.0.0/0 --yes --image=075585003325/Flatcar-stable-2905.2.5-hvm --channel=alpha --networking=kopeio --container-runtime=containerd --admin-access 34.123.84.185/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones eu-west-1a --master-size c5.large
I1019 19:15:32.226950    4774 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I1019 19:15:32.227036    4774 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true
I1019 19:15:32.279403    4774 create_cluster.go:728] Using SSH public key: /etc/aws-ssh/aws-ssh-public
I1019 19:15:32.871071    4774 new_cluster.go:1011]  Cloud Provider ID = aws
... skipping 42 lines ...

I1019 19:15:59.806633    4732 up.go:181] /logs/artifacts/bce76def-3110-11ec-80d4-3604f5208a0d/kops validate cluster --name e2e-e05d2a908c-62691.test-cncf-aws.k8s.io --count 10 --wait 20m0s
I1019 19:15:59.820617    4794 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I1019 19:15:59.820745    4794 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true
Validating cluster e2e-e05d2a908c-62691.test-cncf-aws.k8s.io

W1019 19:16:01.206255    4794 validate_cluster.go:173] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-e05d2a908c-62691.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-1a	Master	c5.large	1	1	eu-west-1a
nodes-eu-west-1a	Node	t3.medium	4	4	eu-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
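The message above names the two logs worth checking while the API record still holds the kops placeholder (203.0.113.123). A minimal diagnostic sketch, assuming SSH reachability of the master (MASTER_IP is a hypothetical placeholder; the key path and the core user come from the dump command earlier in this log), and noting that protokube may run as a systemd unit or as a container depending on the kops version:

# Is the record still the placeholder, or has dns-controller updated it yet?
dig +short api.e2e-e05d2a908c-62691.test-cncf-aws.k8s.io

# Once the apiserver answers, dns-controller is a Deployment in kube-system:
kubectl -n kube-system logs deployment/dns-controller --tail=50

# Before that, protokube logs are only reachable on the master itself:
ssh -i /etc/aws-ssh/aws-ssh-private core@${MASTER_IP} 'sudo journalctl -u protokube --no-pager | tail -n 50'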
... skipping 289 lines: the identical dns/apiserver validation failure above, repeated as kops retried every ~10s from 19:16:11 through 19:19:12 ...
W1019 19:19:22.048733    4794 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-1a	Master	c5.large	1	1	eu-west-1a
nodes-eu-west-1a	Node	t3.medium	4	4	eu-west-1a

... skipping 7 lines ...
Machine	i-069822d95ce9600b4				machine "i-069822d95ce9600b4" has not yet joined cluster
Machine	i-0b1dfe07407894a04				machine "i-0b1dfe07407894a04" has not yet joined cluster
Machine	i-0d26a899e9cb9e737				machine "i-0d26a899e9cb9e737" has not yet joined cluster
Pod	kube-system/coredns-5dc785954d-9rkwf		system-cluster-critical pod "coredns-5dc785954d-9rkwf" is pending
Pod	kube-system/coredns-autoscaler-84d4cfd89c-pkbc9	system-cluster-critical pod "coredns-autoscaler-84d4cfd89c-pkbc9" is pending

Validation Failed
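From here the failure mode changes: DNS resolves, the master is validating, and the remaining errors are nodes that have not yet registered plus pending system pods. One way to watch this converge by hand, assuming the harness's kubeconfig is active (or one exported via kops export kubecfg):

# Stream node registrations as kubelets join:
kubectl get nodes -o wide -w

# Or let kops poll until healthy, mirroring the harness's own loop:
kops validate cluster --name e2e-e05d2a908c-62691.test-cncf-aws.k8s.io --wait 10m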
W1019 19:19:35.178535    4794 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-1a	Master	c5.large	1	1	eu-west-1a
nodes-eu-west-1a	Node	t3.medium	4	4	eu-west-1a

... skipping 6 lines ...

VALIDATION ERRORS
KIND	NAME						MESSAGE
Machine	i-0d26a899e9cb9e737				machine "i-0d26a899e9cb9e737" has not yet joined cluster
Node	ip-172-20-43-129.eu-west-1.compute.internal	node "ip-172-20-43-129.eu-west-1.compute.internal" of role "node" is not ready

Validation Failed
W1019 19:19:47.210681    4794 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-1a	Master	c5.large	1	1	eu-west-1a
nodes-eu-west-1a	Node	t3.medium	4	4	eu-west-1a

... skipping 6 lines ...

VALIDATION ERRORS
KIND	NAME						MESSAGE
Machine	i-0d26a899e9cb9e737				machine "i-0d26a899e9cb9e737" has not yet joined cluster
Pod	kube-system/kopeio-networking-agent-xrm5b	system-node-critical pod "kopeio-networking-agent-xrm5b" is pending

Validation Failed
W1019 19:19:59.159575    4794 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-1a	Master	c5.large	1	1	eu-west-1a
nodes-eu-west-1a	Node	t3.medium	4	4	eu-west-1a

... skipping 21 lines ...
ip-172-20-55-71.eu-west-1.compute.internal	node	True

VALIDATION ERRORS
KIND	NAME									MESSAGE
Pod	kube-system/kube-proxy-ip-172-20-55-71.eu-west-1.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-55-71.eu-west-1.compute.internal" is pending

Validation Failed
W1019 19:20:23.024818    4794 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-1a	Master	c5.large	1	1	eu-west-1a
nodes-eu-west-1a	Node	t3.medium	4	4	eu-west-1a

... skipping 7 lines ...

VALIDATION ERRORS
KIND	NAME									MESSAGE
Pod	kube-system/kube-proxy-ip-172-20-35-5.eu-west-1.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-35-5.eu-west-1.compute.internal" is pending
Pod	kube-system/kube-proxy-ip-172-20-52-34.eu-west-1.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-52-34.eu-west-1.compute.internal" is pending

Validation Failed
W1019 19:20:34.925810    4794 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-1a	Master	c5.large	1	1	eu-west-1a
nodes-eu-west-1a	Node	t3.medium	4	4	eu-west-1a

... skipping 663 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-bindmounted]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 150 lines ...
STEP: Destroying namespace "pod-disks-1924" for this suite.


S [SKIPPING] in Spec Setup (BeforeEach) [0.858 seconds]
[sig-storage] Pod Disks
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should be able to delete a non-existent PD without error [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:449

  Requires at least 2 nodes (not 0)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:75
------------------------------
... skipping 84 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 19 19:23:03.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "request-timeout-5119" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Server request timeout the request should be served with a default timeout if the specified timeout in the request URL exceeds maximum allowed","total":-1,"completed":1,"skipped":7,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:23:04.216: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 99 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 19 19:23:06.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "apf-1575" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] API priority and fairness should ensure that requests can be classified by adding FlowSchema and PriorityLevelConfiguration","total":-1,"completed":2,"skipped":16,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 20 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    when running a container with a new image
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266
      should not be able to pull from private registry without secret [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:388
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull from private registry without secret [NodeConformance]","total":-1,"completed":1,"skipped":0,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 3 lines ...
Oct 19 19:23:02.881: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-0b20370e-2104-465d-b576-fad65c597d97
STEP: Creating a pod to test consume secrets
Oct 19 19:23:03.309: INFO: Waiting up to 5m0s for pod "pod-secrets-97469275-7730-4f32-ab25-d9f19d5e37cf" in namespace "secrets-693" to be "Succeeded or Failed"
Oct 19 19:23:03.421: INFO: Pod "pod-secrets-97469275-7730-4f32-ab25-d9f19d5e37cf": Phase="Pending", Reason="", readiness=false. Elapsed: 111.742031ms
Oct 19 19:23:05.529: INFO: Pod "pod-secrets-97469275-7730-4f32-ab25-d9f19d5e37cf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.219176569s
Oct 19 19:23:07.637: INFO: Pod "pod-secrets-97469275-7730-4f32-ab25-d9f19d5e37cf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.327162059s
STEP: Saw pod success
Oct 19 19:23:07.637: INFO: Pod "pod-secrets-97469275-7730-4f32-ab25-d9f19d5e37cf" satisfied condition "Succeeded or Failed"
Oct 19 19:23:07.744: INFO: Trying to get logs from node ip-172-20-55-71.eu-west-1.compute.internal pod pod-secrets-97469275-7730-4f32-ab25-d9f19d5e37cf container secret-volume-test: <nil>
STEP: delete the pod
Oct 19 19:23:07.979: INFO: Waiting for pod pod-secrets-97469275-7730-4f32-ab25-d9f19d5e37cf to disappear
Oct 19 19:23:08.086: INFO: Pod pod-secrets-97469275-7730-4f32-ab25-d9f19d5e37cf no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.523 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":4,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:23:08.420: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 23 lines ...
Oct 19 19:23:02.291: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:110
STEP: Creating configMap with name projected-configmap-test-volume-map-414b5072-74a4-4923-a063-4f209faff70b
STEP: Creating a pod to test consume configMaps
Oct 19 19:23:02.729: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-97b03666-f102-44e2-93e8-11eb7f7ccb81" in namespace "projected-7445" to be "Succeeded or Failed"
Oct 19 19:23:02.837: INFO: Pod "pod-projected-configmaps-97b03666-f102-44e2-93e8-11eb7f7ccb81": Phase="Pending", Reason="", readiness=false. Elapsed: 107.981362ms
Oct 19 19:23:04.965: INFO: Pod "pod-projected-configmaps-97b03666-f102-44e2-93e8-11eb7f7ccb81": Phase="Pending", Reason="", readiness=false. Elapsed: 2.236114559s
Oct 19 19:23:07.072: INFO: Pod "pod-projected-configmaps-97b03666-f102-44e2-93e8-11eb7f7ccb81": Phase="Pending", Reason="", readiness=false. Elapsed: 4.343173778s
Oct 19 19:23:09.180: INFO: Pod "pod-projected-configmaps-97b03666-f102-44e2-93e8-11eb7f7ccb81": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.451169268s
STEP: Saw pod success
Oct 19 19:23:09.180: INFO: Pod "pod-projected-configmaps-97b03666-f102-44e2-93e8-11eb7f7ccb81" satisfied condition "Succeeded or Failed"
Oct 19 19:23:09.287: INFO: Trying to get logs from node ip-172-20-52-34.eu-west-1.compute.internal pod pod-projected-configmaps-97b03666-f102-44e2-93e8-11eb7f7ccb81 container agnhost-container: <nil>
STEP: delete the pod
Oct 19 19:23:09.513: INFO: Waiting for pod pod-projected-configmaps-97b03666-f102-44e2-93e8-11eb7f7ccb81 to disappear
Oct 19 19:23:09.620: INFO: Pod pod-projected-configmaps-97b03666-f102-44e2-93e8-11eb7f7ccb81 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:8.078 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:110
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":1,"skipped":3,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:23:09.955: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 41 lines ...
Oct 19 19:23:03.678: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:282
Oct 19 19:23:04.043: INFO: Waiting up to 5m0s for pod "busybox-privileged-true-1731bcdc-0676-4c5a-989f-6f0dd0ae1756" in namespace "security-context-test-187" to be "Succeeded or Failed"
Oct 19 19:23:04.150: INFO: Pod "busybox-privileged-true-1731bcdc-0676-4c5a-989f-6f0dd0ae1756": Phase="Pending", Reason="", readiness=false. Elapsed: 107.33998ms
Oct 19 19:23:06.258: INFO: Pod "busybox-privileged-true-1731bcdc-0676-4c5a-989f-6f0dd0ae1756": Phase="Pending", Reason="", readiness=false. Elapsed: 2.21517208s
Oct 19 19:23:08.366: INFO: Pod "busybox-privileged-true-1731bcdc-0676-4c5a-989f-6f0dd0ae1756": Phase="Pending", Reason="", readiness=false. Elapsed: 4.32331351s
Oct 19 19:23:10.475: INFO: Pod "busybox-privileged-true-1731bcdc-0676-4c5a-989f-6f0dd0ae1756": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.431824159s
Oct 19 19:23:10.475: INFO: Pod "busybox-privileged-true-1731bcdc-0676-4c5a-989f-6f0dd0ae1756" satisfied condition "Succeeded or Failed"
Oct 19 19:23:10.584: INFO: Got logs for pod "busybox-privileged-true-1731bcdc-0676-4c5a-989f-6f0dd0ae1756": ""
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 19 19:23:10.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-187" for this suite.

... skipping 3 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a pod with privileged
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:232
    should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:282
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]","total":-1,"completed":1,"skipped":6,"failed":0}
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 19 19:23:10.919: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 14 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:91
STEP: Creating a pod to test downward API volume plugin
Oct 19 19:23:09.094: INFO: Waiting up to 5m0s for pod "metadata-volume-5817b5bf-6b03-4739-8107-13b1407cc045" in namespace "projected-7597" to be "Succeeded or Failed"
Oct 19 19:23:09.204: INFO: Pod "metadata-volume-5817b5bf-6b03-4739-8107-13b1407cc045": Phase="Pending", Reason="", readiness=false. Elapsed: 109.803762ms
Oct 19 19:23:11.312: INFO: Pod "metadata-volume-5817b5bf-6b03-4739-8107-13b1407cc045": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.218166041s
STEP: Saw pod success
Oct 19 19:23:11.312: INFO: Pod "metadata-volume-5817b5bf-6b03-4739-8107-13b1407cc045" satisfied condition "Succeeded or Failed"
Oct 19 19:23:11.419: INFO: Trying to get logs from node ip-172-20-55-71.eu-west-1.compute.internal pod metadata-volume-5817b5bf-6b03-4739-8107-13b1407cc045 container client-container: <nil>
STEP: delete the pod
Oct 19 19:23:11.638: INFO: Waiting for pod metadata-volume-5817b5bf-6b03-4739-8107-13b1407cc045 to disappear
Oct 19 19:23:11.745: INFO: Pod metadata-volume-5817b5bf-6b03-4739-8107-13b1407cc045 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 19 19:23:11.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7597" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":2,"skipped":8,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 3 lines ...
Oct 19 19:23:02.313: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-map-2cd22307-cc1c-4582-bea8-e02286e31320
STEP: Creating a pod to test consume configMaps
Oct 19 19:23:02.743: INFO: Waiting up to 5m0s for pod "pod-configmaps-47b4ba95-0655-4098-bb6f-63a569b25fe5" in namespace "configmap-9478" to be "Succeeded or Failed"
Oct 19 19:23:02.849: INFO: Pod "pod-configmaps-47b4ba95-0655-4098-bb6f-63a569b25fe5": Phase="Pending", Reason="", readiness=false. Elapsed: 105.342372ms
Oct 19 19:23:04.966: INFO: Pod "pod-configmaps-47b4ba95-0655-4098-bb6f-63a569b25fe5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.222386839s
Oct 19 19:23:07.071: INFO: Pod "pod-configmaps-47b4ba95-0655-4098-bb6f-63a569b25fe5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.328169168s
Oct 19 19:23:09.179: INFO: Pod "pod-configmaps-47b4ba95-0655-4098-bb6f-63a569b25fe5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.435455898s
Oct 19 19:23:11.289: INFO: Pod "pod-configmaps-47b4ba95-0655-4098-bb6f-63a569b25fe5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.545986748s
STEP: Saw pod success
Oct 19 19:23:11.289: INFO: Pod "pod-configmaps-47b4ba95-0655-4098-bb6f-63a569b25fe5" satisfied condition "Succeeded or Failed"
Oct 19 19:23:11.395: INFO: Trying to get logs from node ip-172-20-43-129.eu-west-1.compute.internal pod pod-configmaps-47b4ba95-0655-4098-bb6f-63a569b25fe5 container agnhost-container: <nil>
STEP: delete the pod
Oct 19 19:23:11.909: INFO: Waiting for pod pod-configmaps-47b4ba95-0655-4098-bb6f-63a569b25fe5 to disappear
Oct 19 19:23:12.014: INFO: Pod pod-configmaps-47b4ba95-0655-4098-bb6f-63a569b25fe5 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:10.459 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":3,"failed":0}

S
------------------------------
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 20 lines ...
• [SLOW TEST:13.921 seconds]
[sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","total":-1,"completed":1,"skipped":10,"failed":0}

S
------------------------------
[BeforeEach] [sig-cli] Kubectl Port forwarding
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 35 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:474
    that expects a client request
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:475
      should support a client that connects, sends NO DATA, and disconnects
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:476
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects a client request should support a client that connects, sends NO DATA, and disconnects","total":-1,"completed":1,"skipped":8,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 18 lines ...
• [SLOW TEST:13.139 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":-1,"completed":1,"skipped":14,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:23:16.723: INFO: Only supported for providers [gce gke] (not aws)
... skipping 29 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 19 19:23:16.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "request-timeout-7614" for this suite.

•S
------------------------------
{"msg":"PASSED [sig-api-machinery] Server request timeout default timeout should be used if the specified timeout in the request URL is 0s","total":-1,"completed":2,"skipped":11,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:23:16.765: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 65 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name projected-secret-test-map-6666f5a0-2fdf-4666-bc47-d9733f7943f5
STEP: Creating a pod to test consume secrets
Oct 19 19:23:10.727: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-37bc1eca-fc37-44f3-a1ef-b098b1d91bc3" in namespace "projected-2645" to be "Succeeded or Failed"
Oct 19 19:23:10.835: INFO: Pod "pod-projected-secrets-37bc1eca-fc37-44f3-a1ef-b098b1d91bc3": Phase="Pending", Reason="", readiness=false. Elapsed: 108.291391ms
Oct 19 19:23:12.944: INFO: Pod "pod-projected-secrets-37bc1eca-fc37-44f3-a1ef-b098b1d91bc3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.216787753s
Oct 19 19:23:15.053: INFO: Pod "pod-projected-secrets-37bc1eca-fc37-44f3-a1ef-b098b1d91bc3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.325761954s
Oct 19 19:23:17.160: INFO: Pod "pod-projected-secrets-37bc1eca-fc37-44f3-a1ef-b098b1d91bc3": Phase="Running", Reason="", readiness=true. Elapsed: 6.432839357s
Oct 19 19:23:19.268: INFO: Pod "pod-projected-secrets-37bc1eca-fc37-44f3-a1ef-b098b1d91bc3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.540598569s
STEP: Saw pod success
Oct 19 19:23:19.268: INFO: Pod "pod-projected-secrets-37bc1eca-fc37-44f3-a1ef-b098b1d91bc3" satisfied condition "Succeeded or Failed"
Oct 19 19:23:19.374: INFO: Trying to get logs from node ip-172-20-52-34.eu-west-1.compute.internal pod pod-projected-secrets-37bc1eca-fc37-44f3-a1ef-b098b1d91bc3 container projected-secret-volume-test: <nil>
STEP: delete the pod
Oct 19 19:23:19.596: INFO: Waiting for pod pod-projected-secrets-37bc1eca-fc37-44f3-a1ef-b098b1d91bc3 to disappear
Oct 19 19:23:19.702: INFO: Pod pod-projected-secrets-37bc1eca-fc37-44f3-a1ef-b098b1d91bc3 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:9.939 seconds]
[sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":6,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:23:19.935: INFO: Driver local doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 25 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: gluster]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for node OS distro [gci ubuntu custom] (not debian)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:263
------------------------------
... skipping 53 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Oct 19 19:23:12.997: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6d6c98b9-3864-441c-a96a-c07ce3a436a9" in namespace "downward-api-1443" to be "Succeeded or Failed"
Oct 19 19:23:13.103: INFO: Pod "downwardapi-volume-6d6c98b9-3864-441c-a96a-c07ce3a436a9": Phase="Pending", Reason="", readiness=false. Elapsed: 105.419202ms
Oct 19 19:23:15.210: INFO: Pod "downwardapi-volume-6d6c98b9-3864-441c-a96a-c07ce3a436a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.212413433s
Oct 19 19:23:17.316: INFO: Pod "downwardapi-volume-6d6c98b9-3864-441c-a96a-c07ce3a436a9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.318333316s
Oct 19 19:23:19.422: INFO: Pod "downwardapi-volume-6d6c98b9-3864-441c-a96a-c07ce3a436a9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.424039158s
STEP: Saw pod success
Oct 19 19:23:19.422: INFO: Pod "downwardapi-volume-6d6c98b9-3864-441c-a96a-c07ce3a436a9" satisfied condition "Succeeded or Failed"
Oct 19 19:23:19.527: INFO: Trying to get logs from node ip-172-20-52-34.eu-west-1.compute.internal pod downwardapi-volume-6d6c98b9-3864-441c-a96a-c07ce3a436a9 container client-container: <nil>
STEP: delete the pod
Oct 19 19:23:19.760: INFO: Waiting for pod downwardapi-volume-6d6c98b9-3864-441c-a96a-c07ce3a436a9 to disappear
Oct 19 19:23:19.865: INFO: Pod downwardapi-volume-6d6c98b9-3864-441c-a96a-c07ce3a436a9 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:7.720 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide podname only [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":4,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:23:20.112: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 142 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    on terminated container
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:134
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":9,"failed":0}

SS
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support memory backed volumes of specified size","total":-1,"completed":2,"skipped":6,"failed":0}
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 19 19:23:11.686: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 106 lines ...
• [SLOW TEST:15.681 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":-1,"completed":3,"skipped":6,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:23:27.402: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 91 lines ...
Oct 19 19:23:15.347: INFO: PersistentVolumeClaim pvc-9f9db found but phase is Pending instead of Bound.
Oct 19 19:23:17.456: INFO: PersistentVolumeClaim pvc-9f9db found and phase=Bound (6.430962868s)
Oct 19 19:23:17.456: INFO: Waiting up to 3m0s for PersistentVolume local-htv7t to have phase Bound
Oct 19 19:23:17.563: INFO: PersistentVolume local-htv7t found and phase=Bound (106.861111ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-dwx8
STEP: Creating a pod to test subpath
Oct 19 19:23:17.888: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-dwx8" in namespace "provisioning-1677" to be "Succeeded or Failed"
Oct 19 19:23:17.995: INFO: Pod "pod-subpath-test-preprovisionedpv-dwx8": Phase="Pending", Reason="", readiness=false. Elapsed: 106.858842ms
Oct 19 19:23:20.104: INFO: Pod "pod-subpath-test-preprovisionedpv-dwx8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.215660383s
Oct 19 19:23:22.214: INFO: Pod "pod-subpath-test-preprovisionedpv-dwx8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.326053017s
Oct 19 19:23:24.323: INFO: Pod "pod-subpath-test-preprovisionedpv-dwx8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.434390439s
Oct 19 19:23:26.441: INFO: Pod "pod-subpath-test-preprovisionedpv-dwx8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.55247552s
STEP: Saw pod success
Oct 19 19:23:26.441: INFO: Pod "pod-subpath-test-preprovisionedpv-dwx8" satisfied condition "Succeeded or Failed"
Oct 19 19:23:26.549: INFO: Trying to get logs from node ip-172-20-35-5.eu-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-dwx8 container test-container-subpath-preprovisionedpv-dwx8: <nil>
STEP: delete the pod
Oct 19 19:23:26.769: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-dwx8 to disappear
Oct 19 19:23:26.876: INFO: Pod pod-subpath-test-preprovisionedpv-dwx8 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-dwx8
Oct 19 19:23:26.876: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-dwx8" in namespace "provisioning-1677"
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:369
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":1,"skipped":4,"failed":0}

SS
------------------------------
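The `Waiting up to 5m0s for pod "..." to be "Succeeded or Failed"` runs above are a poll loop over the pod's status.phase; the roughly 2-second spacing of the Elapsed values matches a 2-second poll interval. A minimal client-go sketch of that shape — the helper name and exact log wording are illustrative, not the e2e framework's own code:

package e2esketch

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodSuccessOrFailure polls the pod's phase every 2s (up to 5m) and
// stops once the pod reaches a terminal phase, logging each observation.
func waitForPodSuccessOrFailure(c kubernetes.Interface, ns, name string) error {
	start := time.Now()
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := c.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("Pod %q: Phase=%q. Elapsed: %v\n", name, pod.Status.Phase, time.Since(start))
		switch pod.Status.Phase {
		case corev1.PodSucceeded, corev1.PodFailed:
			return true, nil // terminal phase reached; condition satisfied
		}
		return false, nil // still Pending/Running; keep polling
	})
}
------------------------------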
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:23:29.843: INFO: Only supported for providers [azure] (not aws)
... skipping 177 lines ...
• [SLOW TEST:9.227 seconds]
[sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should ensure a single API token exists
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/service_accounts.go:52
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should ensure a single API token exists","total":-1,"completed":4,"skipped":11,"failed":0}

SS
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":-1,"completed":1,"skipped":1,"failed":0}
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 19 19:23:20.637: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir volume type on node default medium
Oct 19 19:23:21.278: INFO: Waiting up to 5m0s for pod "pod-96a1f30e-ec2f-4e86-96c2-414e1ca361d8" in namespace "emptydir-3409" to be "Succeeded or Failed"
Oct 19 19:23:21.384: INFO: Pod "pod-96a1f30e-ec2f-4e86-96c2-414e1ca361d8": Phase="Pending", Reason="", readiness=false. Elapsed: 106.233062ms
Oct 19 19:23:23.490: INFO: Pod "pod-96a1f30e-ec2f-4e86-96c2-414e1ca361d8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.212751266s
Oct 19 19:23:25.597: INFO: Pod "pod-96a1f30e-ec2f-4e86-96c2-414e1ca361d8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.319185947s
Oct 19 19:23:27.716: INFO: Pod "pod-96a1f30e-ec2f-4e86-96c2-414e1ca361d8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.438320139s
Oct 19 19:23:29.823: INFO: Pod "pod-96a1f30e-ec2f-4e86-96c2-414e1ca361d8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.545663194s
STEP: Saw pod success
Oct 19 19:23:29.823: INFO: Pod "pod-96a1f30e-ec2f-4e86-96c2-414e1ca361d8" satisfied condition "Succeeded or Failed"
Oct 19 19:23:29.930: INFO: Trying to get logs from node ip-172-20-52-34.eu-west-1.compute.internal pod pod-96a1f30e-ec2f-4e86-96c2-414e1ca361d8 container test-container: <nil>
STEP: delete the pod
Oct 19 19:23:30.146: INFO: Waiting for pod pod-96a1f30e-ec2f-4e86-96c2-414e1ca361d8 to disappear
Oct 19 19:23:30.252: INFO: Pod pod-96a1f30e-ec2f-4e86-96c2-414e1ca361d8 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 116 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":1,"skipped":11,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50
[It] volume on tmpfs should have the correct mode using FSGroup
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:75
STEP: Creating a pod to test emptydir volume type on tmpfs
Oct 19 19:23:30.618: INFO: Waiting up to 5m0s for pod "pod-267d757b-fe4c-4ae8-8a0f-9544b16e8800" in namespace "emptydir-2568" to be "Succeeded or Failed"
Oct 19 19:23:30.725: INFO: Pod "pod-267d757b-fe4c-4ae8-8a0f-9544b16e8800": Phase="Pending", Reason="", readiness=false. Elapsed: 106.978062ms
Oct 19 19:23:32.838: INFO: Pod "pod-267d757b-fe4c-4ae8-8a0f-9544b16e8800": Phase="Pending", Reason="", readiness=false. Elapsed: 2.219706306s
Oct 19 19:23:34.946: INFO: Pod "pod-267d757b-fe4c-4ae8-8a0f-9544b16e8800": Phase="Pending", Reason="", readiness=false. Elapsed: 4.328263014s
Oct 19 19:23:37.054: INFO: Pod "pod-267d757b-fe4c-4ae8-8a0f-9544b16e8800": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.436486891s
STEP: Saw pod success
Oct 19 19:23:37.054: INFO: Pod "pod-267d757b-fe4c-4ae8-8a0f-9544b16e8800" satisfied condition "Succeeded or Failed"
Oct 19 19:23:37.161: INFO: Trying to get logs from node ip-172-20-35-5.eu-west-1.compute.internal pod pod-267d757b-fe4c-4ae8-8a0f-9544b16e8800 container test-container: <nil>
STEP: delete the pod
Oct 19 19:23:37.383: INFO: Waiting for pod pod-267d757b-fe4c-4ae8-8a0f-9544b16e8800 to disappear
Oct 19 19:23:37.490: INFO: Pod pod-267d757b-fe4c-4ae8-8a0f-9544b16e8800 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 6 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:48
    volume on tmpfs should have the correct mode using FSGroup
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:75
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on tmpfs should have the correct mode using FSGroup","total":-1,"completed":2,"skipped":20,"failed":0}

S
------------------------------
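For reference on the `volume on tmpfs should have the correct mode using FSGroup` spec above: the pod under test combines a memory-medium emptyDir (tmpfs) with a pod-level fsGroup, so the kubelet applies the group to the volume. A sketch of that pod shape using the core/v1 types — the names, image, and group id below are illustrative, not the test's actual values:

package e2esketch

import corev1 "k8s.io/api/core/v1"

// fsGroupTmpfsPod builds a pod with a tmpfs-backed emptyDir and an fsGroup.
func fsGroupTmpfsPod() *corev1.Pod {
	fsGroup := int64(1001) // illustrative group id
	return &corev1.Pod{
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{FSGroup: &fsGroup},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "registry.k8s.io/e2e-test-images/busybox:1.29", // illustrative
				Command: []string{"sh", "-c", "ls -ld /mnt/volume"},
				VolumeMounts: []corev1.VolumeMount{{
					Name: "test-volume", MountPath: "/mnt/volume",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{
						Medium: corev1.StorageMediumMemory, // tmpfs-backed emptyDir
					},
				},
			}},
		},
	}
}
------------------------------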
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 67 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":1,"skipped":1,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:23:40.941: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 86 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  When pod refers to non-existent ephemeral storage
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:53
    should allow deletion of pod with invalid volume : configmap
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55
------------------------------
{"msg":"PASSED [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : configmap","total":-1,"completed":1,"skipped":7,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 19 19:23:35.812: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container with uid 0 [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:99
Oct 19 19:23:36.451: INFO: Waiting up to 5m0s for pod "busybox-user-0-e03ee3fd-19f3-462e-b7a9-963556e67728" in namespace "security-context-test-2998" to be "Succeeded or Failed"
Oct 19 19:23:36.557: INFO: Pod "busybox-user-0-e03ee3fd-19f3-462e-b7a9-963556e67728": Phase="Pending", Reason="", readiness=false. Elapsed: 105.836623ms
Oct 19 19:23:38.668: INFO: Pod "busybox-user-0-e03ee3fd-19f3-462e-b7a9-963556e67728": Phase="Pending", Reason="", readiness=false. Elapsed: 2.216480489s
Oct 19 19:23:40.774: INFO: Pod "busybox-user-0-e03ee3fd-19f3-462e-b7a9-963556e67728": Phase="Pending", Reason="", readiness=false. Elapsed: 4.323046716s
Oct 19 19:23:42.880: INFO: Pod "busybox-user-0-e03ee3fd-19f3-462e-b7a9-963556e67728": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.428996982s
Oct 19 19:23:42.880: INFO: Pod "busybox-user-0-e03ee3fd-19f3-462e-b7a9-963556e67728" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 19 19:23:42.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-2998" for this suite.


... skipping 30 lines ...
Oct 19 19:23:31.268: INFO: PersistentVolumeClaim pvc-6gmnp found but phase is Pending instead of Bound.
Oct 19 19:23:33.375: INFO: PersistentVolumeClaim pvc-6gmnp found and phase=Bound (10.638340112s)
Oct 19 19:23:33.375: INFO: Waiting up to 3m0s for PersistentVolume local-9gkzd to have phase Bound
Oct 19 19:23:33.481: INFO: PersistentVolume local-9gkzd found and phase=Bound (106.273052ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-fqdq
STEP: Creating a pod to test subpath
Oct 19 19:23:33.803: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-fqdq" in namespace "provisioning-7189" to be "Succeeded or Failed"
Oct 19 19:23:33.909: INFO: Pod "pod-subpath-test-preprovisionedpv-fqdq": Phase="Pending", Reason="", readiness=false. Elapsed: 106.016272ms
Oct 19 19:23:36.016: INFO: Pod "pod-subpath-test-preprovisionedpv-fqdq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.2133592s
Oct 19 19:23:38.122: INFO: Pod "pod-subpath-test-preprovisionedpv-fqdq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.319631736s
Oct 19 19:23:40.231: INFO: Pod "pod-subpath-test-preprovisionedpv-fqdq": Phase="Pending", Reason="", readiness=false. Elapsed: 6.428515733s
Oct 19 19:23:42.345: INFO: Pod "pod-subpath-test-preprovisionedpv-fqdq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.542448339s
STEP: Saw pod success
Oct 19 19:23:42.345: INFO: Pod "pod-subpath-test-preprovisionedpv-fqdq" satisfied condition "Succeeded or Failed"
Oct 19 19:23:42.454: INFO: Trying to get logs from node ip-172-20-35-5.eu-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-fqdq container test-container-subpath-preprovisionedpv-fqdq: <nil>
STEP: delete the pod
Oct 19 19:23:42.693: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-fqdq to disappear
Oct 19 19:23:42.803: INFO: Pod pod-subpath-test-preprovisionedpv-fqdq no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-fqdq
Oct 19 19:23:42.803: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-fqdq" in namespace "provisioning-7189"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:384
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":2,"skipped":23,"failed":0}

SSSS
------------------------------
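The `PersistentVolumeClaim ... found but phase is Pending instead of Bound.` lines above come from the same kind of poll loop, this time over the claim's status.phase. A minimal client-go sketch of that binding wait — the helper name is illustrative, not the framework's:

package e2esketch

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPVCBound polls the claim until its phase is Bound (or times out).
func waitForPVCBound(c kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(2*time.Second, 3*time.Minute, func() (bool, error) {
		pvc, err := c.CoreV1().PersistentVolumeClaims(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		if pvc.Status.Phase != corev1.ClaimBound {
			fmt.Printf("PersistentVolumeClaim %s found but phase is %s instead of Bound.\n",
				name, pvc.Status.Phase)
			return false, nil
		}
		return true, nil
	})
}
------------------------------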
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 131 lines ...
      Only supported for providers [openstack] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1092
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":1,"failed":0}
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 19 19:23:30.479: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 16 lines ...
• [SLOW TEST:21.940 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be restarted with a local redirect http liveness probe
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:274
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted with a local redirect http liveness probe","total":-1,"completed":3,"skipped":1,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:23:52.458: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 107 lines ...
• [SLOW TEST:10.221 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":8,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:23:52.589: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 176 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":4,"skipped":14,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
... skipping 93 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":1,"skipped":1,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:24:00.327: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 60 lines ...
      Driver hostPathSymlink doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SSSS
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull image [NodeConformance]","total":-1,"completed":3,"skipped":20,"failed":0}
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 19 19:23:10.569: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 96 lines ...
• [SLOW TEST:50.533 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  iterative rollouts should eventually progress
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:130
------------------------------
{"msg":"PASSED [sig-apps] Deployment iterative rollouts should eventually progress","total":-1,"completed":4,"skipped":20,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 21 lines ...
Oct 19 19:23:31.752: INFO: PersistentVolumeClaim pvc-hndpt found but phase is Pending instead of Bound.
Oct 19 19:23:33.859: INFO: PersistentVolumeClaim pvc-hndpt found and phase=Bound (8.544327061s)
Oct 19 19:23:33.859: INFO: Waiting up to 3m0s for PersistentVolume local-4tv8f to have phase Bound
Oct 19 19:23:33.965: INFO: PersistentVolume local-4tv8f found and phase=Bound (106.017732ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-kfqq
STEP: Creating a pod to test atomic-volume-subpath
Oct 19 19:23:34.289: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-kfqq" in namespace "provisioning-6625" to be "Succeeded or Failed"
Oct 19 19:23:34.395: INFO: Pod "pod-subpath-test-preprovisionedpv-kfqq": Phase="Pending", Reason="", readiness=false. Elapsed: 106.187972ms
Oct 19 19:23:36.503: INFO: Pod "pod-subpath-test-preprovisionedpv-kfqq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.213592749s
Oct 19 19:23:38.609: INFO: Pod "pod-subpath-test-preprovisionedpv-kfqq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.319831777s
Oct 19 19:23:40.715: INFO: Pod "pod-subpath-test-preprovisionedpv-kfqq": Phase="Pending", Reason="", readiness=false. Elapsed: 6.426482554s
Oct 19 19:23:42.825: INFO: Pod "pod-subpath-test-preprovisionedpv-kfqq": Phase="Pending", Reason="", readiness=false. Elapsed: 8.53581107s
Oct 19 19:23:44.940: INFO: Pod "pod-subpath-test-preprovisionedpv-kfqq": Phase="Running", Reason="", readiness=true. Elapsed: 10.650786766s
... skipping 2 lines ...
Oct 19 19:23:51.262: INFO: Pod "pod-subpath-test-preprovisionedpv-kfqq": Phase="Running", Reason="", readiness=true. Elapsed: 16.973048614s
Oct 19 19:23:53.369: INFO: Pod "pod-subpath-test-preprovisionedpv-kfqq": Phase="Running", Reason="", readiness=true. Elapsed: 19.080025275s
Oct 19 19:23:55.476: INFO: Pod "pod-subpath-test-preprovisionedpv-kfqq": Phase="Running", Reason="", readiness=true. Elapsed: 21.187127517s
Oct 19 19:23:57.584: INFO: Pod "pod-subpath-test-preprovisionedpv-kfqq": Phase="Running", Reason="", readiness=true. Elapsed: 23.294615388s
Oct 19 19:23:59.692: INFO: Pod "pod-subpath-test-preprovisionedpv-kfqq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 25.403358429s
STEP: Saw pod success
Oct 19 19:23:59.692: INFO: Pod "pod-subpath-test-preprovisionedpv-kfqq" satisfied condition "Succeeded or Failed"
Oct 19 19:23:59.799: INFO: Trying to get logs from node ip-172-20-43-129.eu-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-kfqq container test-container-subpath-preprovisionedpv-kfqq: <nil>
STEP: delete the pod
Oct 19 19:24:00.023: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-kfqq to disappear
Oct 19 19:24:00.130: INFO: Pod pod-subpath-test-preprovisionedpv-kfqq no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-kfqq
Oct 19 19:24:00.130: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-kfqq" in namespace "provisioning-6625"
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":3,"skipped":24,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:24:03.190: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 131 lines ...
Oct 19 19:23:30.868: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-1407dwxgk
STEP: creating a claim
Oct 19 19:23:30.975: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-hjkj
STEP: Creating a pod to test subpath
Oct 19 19:23:31.298: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-hjkj" in namespace "provisioning-1407" to be "Succeeded or Failed"
Oct 19 19:23:31.405: INFO: Pod "pod-subpath-test-dynamicpv-hjkj": Phase="Pending", Reason="", readiness=false. Elapsed: 106.898252ms
Oct 19 19:23:33.512: INFO: Pod "pod-subpath-test-dynamicpv-hjkj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.213869948s
Oct 19 19:23:35.619: INFO: Pod "pod-subpath-test-dynamicpv-hjkj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.321441606s
Oct 19 19:23:37.727: INFO: Pod "pod-subpath-test-dynamicpv-hjkj": Phase="Pending", Reason="", readiness=false. Elapsed: 6.428857313s
Oct 19 19:23:39.835: INFO: Pod "pod-subpath-test-dynamicpv-hjkj": Phase="Pending", Reason="", readiness=false. Elapsed: 8.53738484s
Oct 19 19:23:41.943: INFO: Pod "pod-subpath-test-dynamicpv-hjkj": Phase="Pending", Reason="", readiness=false. Elapsed: 10.645337936s
Oct 19 19:23:44.051: INFO: Pod "pod-subpath-test-dynamicpv-hjkj": Phase="Pending", Reason="", readiness=false. Elapsed: 12.752780313s
Oct 19 19:23:46.159: INFO: Pod "pod-subpath-test-dynamicpv-hjkj": Phase="Pending", Reason="", readiness=false. Elapsed: 14.861095371s
Oct 19 19:23:48.267: INFO: Pod "pod-subpath-test-dynamicpv-hjkj": Phase="Pending", Reason="", readiness=false. Elapsed: 16.96877434s
Oct 19 19:23:50.375: INFO: Pod "pod-subpath-test-dynamicpv-hjkj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.07699618s
STEP: Saw pod success
Oct 19 19:23:50.375: INFO: Pod "pod-subpath-test-dynamicpv-hjkj" satisfied condition "Succeeded or Failed"
Oct 19 19:23:50.482: INFO: Trying to get logs from node ip-172-20-35-5.eu-west-1.compute.internal pod pod-subpath-test-dynamicpv-hjkj container test-container-volume-dynamicpv-hjkj: <nil>
STEP: delete the pod
Oct 19 19:23:50.702: INFO: Waiting for pod pod-subpath-test-dynamicpv-hjkj to disappear
Oct 19 19:23:50.816: INFO: Pod pod-subpath-test-dynamicpv-hjkj no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-hjkj
Oct 19 19:23:50.816: INFO: Deleting pod "pod-subpath-test-dynamicpv-hjkj" in namespace "provisioning-1407"
... skipping 20 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory","total":-1,"completed":5,"skipped":13,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:24:07.159: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-storage] Mounted volume expand Should verify mounted devices can be resized","total":-1,"completed":1,"skipped":10,"failed":0}
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 19 19:23:55.702: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 15 lines ...
• [SLOW TEST:12.510 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a persistent volume claim with a storage class
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/resource_quota.go:531
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim with a storage class","total":-1,"completed":2,"skipped":10,"failed":0}

SSSSS
------------------------------
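For context on the ResourceQuota-with-storage-class spec above: such a quota caps claims and requested storage per storage class using the documented `<class>.storageclass.storage.k8s.io/...` resource names. A minimal sketch of that object — the class name "gold" and the limits are illustrative, not the test's actual values:

package e2esketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// storageClassQuota limits PVC count and total requested storage for one class.
func storageClassQuota(ns string) *corev1.ResourceQuota {
	return &corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "quota-pvc-gold", Namespace: ns},
		Spec: corev1.ResourceQuotaSpec{
			Hard: corev1.ResourceList{
				"gold.storageclass.storage.k8s.io/persistentvolumeclaims": resource.MustParse("1"),
				"gold.storageclass.storage.k8s.io/requests.storage":       resource.MustParse("1Gi"),
			},
		},
	}
}
------------------------------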
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 20 lines ...
Oct 19 19:24:00.400: INFO: PersistentVolumeClaim pvc-vlk6d found but phase is Pending instead of Bound.
Oct 19 19:24:02.509: INFO: PersistentVolumeClaim pvc-vlk6d found and phase=Bound (12.75589646s)
Oct 19 19:24:02.509: INFO: Waiting up to 3m0s for PersistentVolume local-m78kb to have phase Bound
Oct 19 19:24:02.616: INFO: PersistentVolume local-m78kb found and phase=Bound (107.169442ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-9nvg
STEP: Creating a pod to test subpath
Oct 19 19:24:02.942: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-9nvg" in namespace "provisioning-1896" to be "Succeeded or Failed"
Oct 19 19:24:03.049: INFO: Pod "pod-subpath-test-preprovisionedpv-9nvg": Phase="Pending", Reason="", readiness=false. Elapsed: 107.040311ms
Oct 19 19:24:05.157: INFO: Pod "pod-subpath-test-preprovisionedpv-9nvg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.215067384s
Oct 19 19:24:07.266: INFO: Pod "pod-subpath-test-preprovisionedpv-9nvg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.323815499s
STEP: Saw pod success
Oct 19 19:24:07.266: INFO: Pod "pod-subpath-test-preprovisionedpv-9nvg" satisfied condition "Succeeded or Failed"
Oct 19 19:24:07.373: INFO: Trying to get logs from node ip-172-20-43-129.eu-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-9nvg container test-container-volume-preprovisionedpv-9nvg: <nil>
STEP: delete the pod
Oct 19 19:24:07.604: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-9nvg to disappear
Oct 19 19:24:07.733: INFO: Pod pod-subpath-test-preprovisionedpv-9nvg no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-9nvg
Oct 19 19:24:07.733: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-9nvg" in namespace "provisioning-1896"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":3,"skipped":21,"failed":0}
[BeforeEach] [sig-node] Pods Extended
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 19 19:24:09.391: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 8 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 19 19:24:10.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2411" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Pods Extended Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":-1,"completed":4,"skipped":21,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:24:10.418: INFO: Only supported for providers [gce gke] (not aws)
... skipping 34 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 19 19:24:11.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "metrics-grabber-6927" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from a ControllerManager.","total":-1,"completed":5,"skipped":28,"failed":0}

S
------------------------------
[BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 19 19:24:11.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "certificates-8438" for this suite.

•
------------------------------
{"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":-1,"completed":3,"skipped":15,"failed":0}

SSSS
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":-1,"completed":1,"skipped":0,"failed":0}
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 19 19:24:06.438: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount projected service account token [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test service account token: 
Oct 19 19:24:07.090: INFO: Waiting up to 5m0s for pod "test-pod-e5ada89a-bcd9-483d-8c98-5d09ea3bd08e" in namespace "svcaccounts-594" to be "Succeeded or Failed"
Oct 19 19:24:07.197: INFO: Pod "test-pod-e5ada89a-bcd9-483d-8c98-5d09ea3bd08e": Phase="Pending", Reason="", readiness=false. Elapsed: 106.621912ms
Oct 19 19:24:09.305: INFO: Pod "test-pod-e5ada89a-bcd9-483d-8c98-5d09ea3bd08e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.215238666s
Oct 19 19:24:11.411: INFO: Pod "test-pod-e5ada89a-bcd9-483d-8c98-5d09ea3bd08e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.321350181s
Oct 19 19:24:13.517: INFO: Pod "test-pod-e5ada89a-bcd9-483d-8c98-5d09ea3bd08e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.427062187s
Oct 19 19:24:15.622: INFO: Pod "test-pod-e5ada89a-bcd9-483d-8c98-5d09ea3bd08e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.532133279s
Oct 19 19:24:17.728: INFO: Pod "test-pod-e5ada89a-bcd9-483d-8c98-5d09ea3bd08e": Phase="Pending", Reason="", readiness=false. Elapsed: 10.637787094s
Oct 19 19:24:19.833: INFO: Pod "test-pod-e5ada89a-bcd9-483d-8c98-5d09ea3bd08e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.743125608s
STEP: Saw pod success
Oct 19 19:24:19.833: INFO: Pod "test-pod-e5ada89a-bcd9-483d-8c98-5d09ea3bd08e" satisfied condition "Succeeded or Failed"
Oct 19 19:24:19.938: INFO: Trying to get logs from node ip-172-20-35-5.eu-west-1.compute.internal pod test-pod-e5ada89a-bcd9-483d-8c98-5d09ea3bd08e container agnhost-container: <nil>
STEP: delete the pod
Oct 19 19:24:20.169: INFO: Waiting for pod test-pod-e5ada89a-bcd9-483d-8c98-5d09ea3bd08e to disappear
Oct 19 19:24:20.274: INFO: Pod test-pod-e5ada89a-bcd9-483d-8c98-5d09ea3bd08e no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:14.048 seconds]
[sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount projected service account token [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount projected service account token [Conformance]","total":-1,"completed":2,"skipped":0,"failed":0}

SSSSSSS
------------------------------
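The `should mount projected service account token` spec above exercises a `projected` volume with a serviceAccountToken source, mounted into the pod read-only. A sketch of that volume using the core/v1 types — the path and expiry below are illustrative values:

package e2esketch

import corev1 "k8s.io/api/core/v1"

// projectedTokenVolume builds a projected volume carrying a service account token.
func projectedTokenVolume() corev1.Volume {
	expiry := int64(3600) // token lifetime in seconds; illustrative
	return corev1.Volume{
		Name: "sa-token",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
						Path:              "token",
						ExpirationSeconds: &expiry,
					},
				}},
			},
		},
	}
}
------------------------------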
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:24:20.542: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 81 lines ...
Oct 19 19:23:37.840: INFO: PersistentVolumeClaim csi-hostpathk2z6w found but phase is Pending instead of Bound.
Oct 19 19:23:39.969: INFO: PersistentVolumeClaim csi-hostpathk2z6w found but phase is Pending instead of Bound.
Oct 19 19:23:42.076: INFO: PersistentVolumeClaim csi-hostpathk2z6w found but phase is Pending instead of Bound.
Oct 19 19:23:44.185: INFO: PersistentVolumeClaim csi-hostpathk2z6w found and phase=Bound (19.093693753s)
STEP: Creating pod pod-subpath-test-dynamicpv-tsgh
STEP: Creating a pod to test subpath
Oct 19 19:23:44.507: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-tsgh" in namespace "provisioning-8701" to be "Succeeded or Failed"
Oct 19 19:23:44.632: INFO: Pod "pod-subpath-test-dynamicpv-tsgh": Phase="Pending", Reason="", readiness=false. Elapsed: 125.743911ms
Oct 19 19:23:46.746: INFO: Pod "pod-subpath-test-dynamicpv-tsgh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.239735019s
Oct 19 19:23:48.854: INFO: Pod "pod-subpath-test-dynamicpv-tsgh": Phase="Pending", Reason="", readiness=false. Elapsed: 4.346922208s
Oct 19 19:23:50.962: INFO: Pod "pod-subpath-test-dynamicpv-tsgh": Phase="Pending", Reason="", readiness=false. Elapsed: 6.455110198s
Oct 19 19:23:53.069: INFO: Pod "pod-subpath-test-dynamicpv-tsgh": Phase="Pending", Reason="", readiness=false. Elapsed: 8.561955369s
Oct 19 19:23:55.176: INFO: Pod "pod-subpath-test-dynamicpv-tsgh": Phase="Pending", Reason="", readiness=false. Elapsed: 10.66981463s
Oct 19 19:23:57.283: INFO: Pod "pod-subpath-test-dynamicpv-tsgh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.776577091s
STEP: Saw pod success
Oct 19 19:23:57.283: INFO: Pod "pod-subpath-test-dynamicpv-tsgh" satisfied condition "Succeeded or Failed"
Oct 19 19:23:57.389: INFO: Trying to get logs from node ip-172-20-43-129.eu-west-1.compute.internal pod pod-subpath-test-dynamicpv-tsgh container test-container-subpath-dynamicpv-tsgh: <nil>
STEP: delete the pod
Oct 19 19:23:57.614: INFO: Waiting for pod pod-subpath-test-dynamicpv-tsgh to disappear
Oct 19 19:23:57.720: INFO: Pod pod-subpath-test-dynamicpv-tsgh no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-tsgh
Oct 19 19:23:57.720: INFO: Deleting pod "pod-subpath-test-dynamicpv-tsgh" in namespace "provisioning-8701"
... skipping 142 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 19 19:24:21.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9259" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info  [Conformance]","total":-1,"completed":3,"skipped":22,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:24:21.840: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 165 lines ...
STEP: Creating a validating webhook configuration
Oct 19 19:23:37.627: INFO: Waiting for webhook configuration to be ready...
Oct 19 19:23:47.946: INFO: Waiting for webhook configuration to be ready...
Oct 19 19:23:58.247: INFO: Waiting for webhook configuration to be ready...
Oct 19 19:24:08.548: INFO: Waiting for webhook configuration to be ready...
Oct 19 19:24:18.765: INFO: Waiting for webhook configuration to be ready...
Oct 19 19:24:18.766: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.errorString | 0xc000244250>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 468 lines ...
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a validating webhook should work [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Oct 19 19:24:18.766: waiting for webhook configuration to be ready
  Unexpected error:
      <*errors.errorString | 0xc000244250>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:432
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":-1,"completed":1,"skipped":13,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]"]}

SSSSSSSSSSSSSSS
------------------------------
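The `timed out waiting for the condition` failure above is the stock error (wait.ErrWaitTimeout) returned by the polling helpers in k8s.io/apimachinery/pkg/util/wait when the condition never becomes true before the deadline; the ~10s spacing of the "Waiting for webhook configuration to be ready..." retries matches such a poll interval. A minimal sketch of that shape, with a placeholder readiness check standing in for the test's actual webhook probe:

package e2esketch

import (
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// webhookIsReady is a stand-in for the test's real probe, which submits a
// request the webhook should intercept and checks the admission response.
func webhookIsReady() (bool, error) { return false, nil }

// waitForWebhook returns wait.ErrWaitTimeout — whose message is exactly
// "timed out waiting for the condition" — if the probe never succeeds.
func waitForWebhook() error {
	return wait.PollImmediate(10*time.Second, 30*time.Second, webhookIsReady)
}
------------------------------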
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:24:29.515: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 79 lines ...
      Driver hostPath doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SSSS
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for API chunking should return chunks of results for list calls","total":-1,"completed":5,"skipped":21,"failed":0}
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 19 19:24:22.207: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward api env vars
Oct 19 19:24:22.851: INFO: Waiting up to 5m0s for pod "downward-api-34d60443-6363-4c92-942a-d064c9e812f4" in namespace "downward-api-5259" to be "Succeeded or Failed"
Oct 19 19:24:22.957: INFO: Pod "downward-api-34d60443-6363-4c92-942a-d064c9e812f4": Phase="Pending", Reason="", readiness=false. Elapsed: 106.398202ms
Oct 19 19:24:25.065: INFO: Pod "downward-api-34d60443-6363-4c92-942a-d064c9e812f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.213908299s
Oct 19 19:24:27.174: INFO: Pod "downward-api-34d60443-6363-4c92-942a-d064c9e812f4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.322844499s
Oct 19 19:24:29.281: INFO: Pod "downward-api-34d60443-6363-4c92-942a-d064c9e812f4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.429811515s
STEP: Saw pod success
Oct 19 19:24:29.281: INFO: Pod "downward-api-34d60443-6363-4c92-942a-d064c9e812f4" satisfied condition "Succeeded or Failed"
Oct 19 19:24:29.387: INFO: Trying to get logs from node ip-172-20-43-129.eu-west-1.compute.internal pod downward-api-34d60443-6363-4c92-942a-d064c9e812f4 container dapi-container: <nil>
STEP: delete the pod
Oct 19 19:24:29.609: INFO: Waiting for pod downward-api-34d60443-6363-4c92-942a-d064c9e812f4 to disappear
Oct 19 19:24:29.717: INFO: Pod downward-api-34d60443-6363-4c92-942a-d064c9e812f4 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 32 lines ...
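For reference on the Downward API spec above (`should provide pod UID as env vars`): the pod's own UID is exposed to the container through an env var whose value comes from a fieldRef on `metadata.uid`. A sketch of that env var — the variable name is illustrative:

package e2esketch

import corev1 "k8s.io/api/core/v1"

// podUIDEnvVar exposes the pod's UID to the container via the downward API.
func podUIDEnvVar() corev1.EnvVar {
	return corev1.EnvVar{
		Name: "POD_UID",
		ValueFrom: &corev1.EnvVarSource{
			FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.uid"},
		},
	}
}
------------------------------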
• [SLOW TEST:9.697 seconds]
[sig-storage] EmptyDir wrapper volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not conflict [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":-1,"completed":4,"skipped":28,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 170 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI Volume expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:561
    should expand volume by restarting pod if attach=on, nodeExpansion=on
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:590
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should expand volume by restarting pod if attach=on, nodeExpansion=on","total":-1,"completed":1,"skipped":3,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:24:34.037: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 2 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: block]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 60 lines ...
Oct 19 19:23:03.136: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-51482b5cb
STEP: creating a claim
Oct 19 19:23:03.243: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-fwk8
STEP: Creating a pod to test subpath
Oct 19 19:23:03.574: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-fwk8" in namespace "provisioning-5148" to be "Succeeded or Failed"
Oct 19 19:23:03.680: INFO: Pod "pod-subpath-test-dynamicpv-fwk8": Phase="Pending", Reason="", readiness=false. Elapsed: 105.816171ms
Oct 19 19:23:05.787: INFO: Pod "pod-subpath-test-dynamicpv-fwk8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.212905159s
Oct 19 19:23:07.894: INFO: Pod "pod-subpath-test-dynamicpv-fwk8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.3199392s
Oct 19 19:23:10.001: INFO: Pod "pod-subpath-test-dynamicpv-fwk8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.426334969s
Oct 19 19:23:12.108: INFO: Pod "pod-subpath-test-dynamicpv-fwk8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.53398777s
Oct 19 19:23:14.216: INFO: Pod "pod-subpath-test-dynamicpv-fwk8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.641081552s
Oct 19 19:23:16.323: INFO: Pod "pod-subpath-test-dynamicpv-fwk8": Phase="Pending", Reason="", readiness=false. Elapsed: 12.748339374s
Oct 19 19:23:18.429: INFO: Pod "pod-subpath-test-dynamicpv-fwk8": Phase="Pending", Reason="", readiness=false. Elapsed: 14.854928986s
Oct 19 19:23:20.537: INFO: Pod "pod-subpath-test-dynamicpv-fwk8": Phase="Pending", Reason="", readiness=false. Elapsed: 16.962655688s
Oct 19 19:23:22.645: INFO: Pod "pod-subpath-test-dynamicpv-fwk8": Phase="Pending", Reason="", readiness=false. Elapsed: 19.070029562s
Oct 19 19:23:24.751: INFO: Pod "pod-subpath-test-dynamicpv-fwk8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 21.176115954s
STEP: Saw pod success
Oct 19 19:23:24.751: INFO: Pod "pod-subpath-test-dynamicpv-fwk8" satisfied condition "Succeeded or Failed"
Oct 19 19:23:24.856: INFO: Trying to get logs from node ip-172-20-35-5.eu-west-1.compute.internal pod pod-subpath-test-dynamicpv-fwk8 container test-container-subpath-dynamicpv-fwk8: <nil>
STEP: delete the pod
Oct 19 19:23:25.075: INFO: Waiting for pod pod-subpath-test-dynamicpv-fwk8 to disappear
Oct 19 19:23:25.181: INFO: Pod pod-subpath-test-dynamicpv-fwk8 no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-fwk8
Oct 19 19:23:25.181: INFO: Deleting pod "pod-subpath-test-dynamicpv-fwk8" in namespace "provisioning-5148"
STEP: Creating pod pod-subpath-test-dynamicpv-fwk8
STEP: Creating a pod to test subpath
Oct 19 19:23:25.395: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-fwk8" in namespace "provisioning-5148" to be "Succeeded or Failed"
Oct 19 19:23:25.501: INFO: Pod "pod-subpath-test-dynamicpv-fwk8": Phase="Pending", Reason="", readiness=false. Elapsed: 105.501373ms
Oct 19 19:23:27.608: INFO: Pod "pod-subpath-test-dynamicpv-fwk8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.212551295s
Oct 19 19:23:29.717: INFO: Pod "pod-subpath-test-dynamicpv-fwk8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.32164992s
Oct 19 19:23:31.824: INFO: Pod "pod-subpath-test-dynamicpv-fwk8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.428378455s
Oct 19 19:23:33.930: INFO: Pod "pod-subpath-test-dynamicpv-fwk8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.534849143s
Oct 19 19:23:36.036: INFO: Pod "pod-subpath-test-dynamicpv-fwk8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.64087921s
... skipping 8 lines ...
Oct 19 19:23:55.013: INFO: Pod "pod-subpath-test-dynamicpv-fwk8": Phase="Pending", Reason="", readiness=false. Elapsed: 29.617084095s
Oct 19 19:23:57.119: INFO: Pod "pod-subpath-test-dynamicpv-fwk8": Phase="Pending", Reason="", readiness=false. Elapsed: 31.723833826s
Oct 19 19:23:59.226: INFO: Pod "pod-subpath-test-dynamicpv-fwk8": Phase="Pending", Reason="", readiness=false. Elapsed: 33.830590688s
Oct 19 19:24:01.332: INFO: Pod "pod-subpath-test-dynamicpv-fwk8": Phase="Pending", Reason="", readiness=false. Elapsed: 35.936873479s
Oct 19 19:24:03.439: INFO: Pod "pod-subpath-test-dynamicpv-fwk8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 38.043853132s
STEP: Saw pod success
Oct 19 19:24:03.439: INFO: Pod "pod-subpath-test-dynamicpv-fwk8" satisfied condition "Succeeded or Failed"
Oct 19 19:24:03.548: INFO: Trying to get logs from node ip-172-20-55-71.eu-west-1.compute.internal pod pod-subpath-test-dynamicpv-fwk8 container test-container-subpath-dynamicpv-fwk8: <nil>
STEP: delete the pod
Oct 19 19:24:03.772: INFO: Waiting for pod pod-subpath-test-dynamicpv-fwk8 to disappear
Oct 19 19:24:03.878: INFO: Pod pod-subpath-test-dynamicpv-fwk8 no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-fwk8
Oct 19 19:24:03.878: INFO: Deleting pod "pod-subpath-test-dynamicpv-fwk8" in namespace "provisioning-5148"
... skipping 23 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:399
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":1,"skipped":5,"failed":0}

SSSSSSSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":4,"skipped":37,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 19 19:24:33.387: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 64 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:369

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":21,"failed":0}
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 19 19:24:29.945: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 34 lines ...
• [SLOW TEST:14.076 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":-1,"completed":7,"skipped":21,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:24:44.038: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 34 lines ...
      Only supported for node OS distro [gci ubuntu custom] (not debian)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:263
------------------------------
SSSSS
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]","total":-1,"completed":2,"skipped":7,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 19 19:23:33.632: INFO: >>> kubeConfig: /root/.kube/config
... skipping 61 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      (Always)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:208
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents","total":-1,"completed":3,"skipped":7,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:24:48.148: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 54 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:399

      Only supported for providers [vsphere] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1437
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":2,"skipped":12,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 19 19:23:43.108: INFO: >>> kubeConfig: /root/.kube/config
... skipping 53 lines ...
Oct 19 19:23:54.588: INFO: PersistentVolumeClaim csi-hostpathjf9f8 found but phase is Pending instead of Bound.
Oct 19 19:23:56.694: INFO: PersistentVolumeClaim csi-hostpathjf9f8 found but phase is Pending instead of Bound.
Oct 19 19:23:58.801: INFO: PersistentVolumeClaim csi-hostpathjf9f8 found but phase is Pending instead of Bound.
Oct 19 19:24:00.907: INFO: PersistentVolumeClaim csi-hostpathjf9f8 found and phase=Bound (12.744217547s)
STEP: Expanding non-expandable pvc
Oct 19 19:24:01.118: INFO: currentPvcSize {{1073741824 0} {<nil>} 1Gi BinarySI}, newSize {{2147483648 0} {<nil>}  BinarySI}
Oct 19 19:24:01.330: INFO: Error updating pvc csi-hostpathjf9f8: persistentvolumeclaims "csi-hostpathjf9f8" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct 19 19:24:03.546: INFO: Error updating pvc csi-hostpathjf9f8: persistentvolumeclaims "csi-hostpathjf9f8" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
... skipping 14 lines ...
Oct 19 19:24:31.756: INFO: Error updating pvc csi-hostpathjf9f8: persistentvolumeclaims "csi-hostpathjf9f8" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
STEP: Deleting pvc
Oct 19 19:24:31.756: INFO: Deleting PersistentVolumeClaim "csi-hostpathjf9f8"
Oct 19 19:24:31.865: INFO: Waiting up to 5m0s for PersistentVolume pvc-41ff3d46-da23-47ad-8501-0c78c0f9e003 to get deleted
Oct 19 19:24:31.970: INFO: PersistentVolume pvc-41ff3d46-da23-47ad-8501-0c78c0f9e003 was removed
STEP: Deleting sc
STEP: deleting the test namespace: volume-expand-8811
... skipping 46 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (block volmode)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not allow expansion of pvcs without AllowVolumeExpansion property
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:157
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property","total":-1,"completed":3,"skipped":12,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:24:48.284: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 141 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":1,"skipped":18,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:24:50.569: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 50 lines ...
• [SLOW TEST:111.680 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should remove from active list jobs that have been deleted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:244
------------------------------
{"msg":"PASSED [sig-apps] CronJob should remove from active list jobs that have been deleted","total":-1,"completed":1,"skipped":9,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50
[It] nonexistent volume subPath should have the correct mode and owner using FSGroup
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:63
STEP: Creating a pod to test emptydir subpath on tmpfs
Oct 19 19:24:44.755: INFO: Waiting up to 5m0s for pod "pod-fbe75d15-4563-4509-80cb-044e880b8e52" in namespace "emptydir-422" to be "Succeeded or Failed"
Oct 19 19:24:44.861: INFO: Pod "pod-fbe75d15-4563-4509-80cb-044e880b8e52": Phase="Pending", Reason="", readiness=false. Elapsed: 106.165502ms
Oct 19 19:24:46.969: INFO: Pod "pod-fbe75d15-4563-4509-80cb-044e880b8e52": Phase="Pending", Reason="", readiness=false. Elapsed: 2.213773994s
Oct 19 19:24:49.079: INFO: Pod "pod-fbe75d15-4563-4509-80cb-044e880b8e52": Phase="Pending", Reason="", readiness=false. Elapsed: 4.323910175s
Oct 19 19:24:51.186: INFO: Pod "pod-fbe75d15-4563-4509-80cb-044e880b8e52": Phase="Pending", Reason="", readiness=false. Elapsed: 6.430764417s
Oct 19 19:24:53.293: INFO: Pod "pod-fbe75d15-4563-4509-80cb-044e880b8e52": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.538293659s
STEP: Saw pod success
Oct 19 19:24:53.293: INFO: Pod "pod-fbe75d15-4563-4509-80cb-044e880b8e52" satisfied condition "Succeeded or Failed"
Oct 19 19:24:53.400: INFO: Trying to get logs from node ip-172-20-43-129.eu-west-1.compute.internal pod pod-fbe75d15-4563-4509-80cb-044e880b8e52 container test-container: <nil>
STEP: delete the pod
Oct 19 19:24:53.619: INFO: Waiting for pod pod-fbe75d15-4563-4509-80cb-044e880b8e52 to disappear
Oct 19 19:24:53.725: INFO: Pod pod-fbe75d15-4563-4509-80cb-044e880b8e52 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 6 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:48
    nonexistent volume subPath should have the correct mode and owner using FSGroup
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:63
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] nonexistent volume subPath should have the correct mode and owner using FSGroup","total":-1,"completed":8,"skipped":30,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:24:53.974: INFO: Only supported for providers [gce gke] (not aws)
... skipping 193 lines ...
2021/10/19 19:24:49 tcp packet: &{SrcPort:46653 DestPort:9000 Seq:3487090116 Ack:0 Flags:40962 WindowSize:65535 Checksum:22122 UrgentPtr:0}, flag: SYN , data: [], addr: 100.96.1.30
2021/10/19 19:24:49 tcp packet: &{SrcPort:40357 DestPort:9000 Seq:2300790287 Ack:0 Flags:40962 WindowSize:65535 Checksum:13614 UrgentPtr:0}, flag: SYN , data: [], addr: 100.96.1.30
2021/10/19 19:24:49 tcp packet: &{SrcPort:46259 DestPort:9000 Seq:2206978196 Ack:0 Flags:40962 WindowSize:65535 Checksum:39154 UrgentPtr:0}, flag: SYN , data: [], addr: 100.96.1.30
2021/10/19 19:24:49 tcp packet: &{SrcPort:35985 DestPort:9000 Seq:3126085492 Ack:0 Flags:40962 WindowSize:65535 Checksum:4460 UrgentPtr:0}, flag: SYN , data: [], addr: 100.96.1.30
2021/10/19 19:24:50 tcp packet: &{SrcPort:46781 DestPort:9000 Seq:4115738322 Ack:0 Flags:40962 WindowSize:65535 Checksum:50077 UrgentPtr:0}, flag: SYN , data: [], addr: 100.96.1.30

Oct 19 19:24:51.149: FAIL: Boom server pod did not send any bad packets to the client

Full Stack Trace
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00269ea80)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc00269ea80)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
... skipping 280 lines ...
Oct 19 19:24:32.890: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
STEP: creating a test aws volume
Oct 19 19:24:33.768: INFO: Successfully created a new PD: "aws://eu-west-1a/vol-0257f3ea19b841ca8".
Oct 19 19:24:33.768: INFO: Creating resource for inline volume
STEP: Creating pod exec-volume-test-inlinevolume-gm7x
STEP: Creating a pod to test exec-volume-test
Oct 19 19:24:33.877: INFO: Waiting up to 5m0s for pod "exec-volume-test-inlinevolume-gm7x" in namespace "volume-5049" to be "Succeeded or Failed"
Oct 19 19:24:33.983: INFO: Pod "exec-volume-test-inlinevolume-gm7x": Phase="Pending", Reason="", readiness=false. Elapsed: 105.854164ms
Oct 19 19:24:36.088: INFO: Pod "exec-volume-test-inlinevolume-gm7x": Phase="Pending", Reason="", readiness=false. Elapsed: 2.211452882s
Oct 19 19:24:38.196: INFO: Pod "exec-volume-test-inlinevolume-gm7x": Phase="Pending", Reason="", readiness=false. Elapsed: 4.318826352s
Oct 19 19:24:40.301: INFO: Pod "exec-volume-test-inlinevolume-gm7x": Phase="Pending", Reason="", readiness=false. Elapsed: 6.424224763s
Oct 19 19:24:42.407: INFO: Pod "exec-volume-test-inlinevolume-gm7x": Phase="Pending", Reason="", readiness=false. Elapsed: 8.530391374s
Oct 19 19:24:44.514: INFO: Pod "exec-volume-test-inlinevolume-gm7x": Phase="Pending", Reason="", readiness=false. Elapsed: 10.636772895s
Oct 19 19:24:46.620: INFO: Pod "exec-volume-test-inlinevolume-gm7x": Phase="Pending", Reason="", readiness=false. Elapsed: 12.743158278s
Oct 19 19:24:48.726: INFO: Pod "exec-volume-test-inlinevolume-gm7x": Phase="Pending", Reason="", readiness=false. Elapsed: 14.849427859s
Oct 19 19:24:50.832: INFO: Pod "exec-volume-test-inlinevolume-gm7x": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.955252471s
STEP: Saw pod success
Oct 19 19:24:50.832: INFO: Pod "exec-volume-test-inlinevolume-gm7x" satisfied condition "Succeeded or Failed"
Oct 19 19:24:50.940: INFO: Trying to get logs from node ip-172-20-43-129.eu-west-1.compute.internal pod exec-volume-test-inlinevolume-gm7x container exec-container-inlinevolume-gm7x: <nil>
STEP: delete the pod
Oct 19 19:24:51.161: INFO: Waiting for pod exec-volume-test-inlinevolume-gm7x to disappear
Oct 19 19:24:51.266: INFO: Pod exec-volume-test-inlinevolume-gm7x no longer exists
STEP: Deleting pod exec-volume-test-inlinevolume-gm7x
Oct 19 19:24:51.266: INFO: Deleting pod "exec-volume-test-inlinevolume-gm7x" in namespace "volume-5049"
Oct 19 19:24:51.671: INFO: Couldn't delete PD "aws://eu-west-1a/vol-0257f3ea19b841ca8", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0257f3ea19b841ca8 is currently attached to i-0b1dfe07407894a04
	status code: 400, request id: a5457a78-47be-475e-861b-1ec64142a9ac
Oct 19 19:24:57.317: INFO: Successfully deleted PD "aws://eu-west-1a/vol-0257f3ea19b841ca8".
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 19 19:24:57.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-5049" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (ext4)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume","total":-1,"completed":5,"skipped":32,"failed":0}

S
------------------------------
[BeforeEach] [sig-cli] Kubectl Port forwarding
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 38 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:452
    that expects NO client request
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:462
      should support a client that connects, sends DATA, and disconnects
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:463
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects NO client request should support a client that connects, sends DATA, and disconnects","total":-1,"completed":2,"skipped":13,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 19 19:24:50.629: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-11be4a18-40e5-4643-9dbd-5bf7cb4b2491
STEP: Creating a pod to test consume secrets
Oct 19 19:24:51.375: INFO: Waiting up to 5m0s for pod "pod-secrets-37553a8b-fcab-4997-bf95-1d67f80aa263" in namespace "secrets-2993" to be "Succeeded or Failed"
Oct 19 19:24:51.489: INFO: Pod "pod-secrets-37553a8b-fcab-4997-bf95-1d67f80aa263": Phase="Pending", Reason="", readiness=false. Elapsed: 114.275271ms
Oct 19 19:24:53.596: INFO: Pod "pod-secrets-37553a8b-fcab-4997-bf95-1d67f80aa263": Phase="Pending", Reason="", readiness=false. Elapsed: 2.221569724s
Oct 19 19:24:55.703: INFO: Pod "pod-secrets-37553a8b-fcab-4997-bf95-1d67f80aa263": Phase="Pending", Reason="", readiness=false. Elapsed: 4.328769856s
Oct 19 19:24:57.810: INFO: Pod "pod-secrets-37553a8b-fcab-4997-bf95-1d67f80aa263": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.435274869s
STEP: Saw pod success
Oct 19 19:24:57.810: INFO: Pod "pod-secrets-37553a8b-fcab-4997-bf95-1d67f80aa263" satisfied condition "Succeeded or Failed"
Oct 19 19:24:57.916: INFO: Trying to get logs from node ip-172-20-43-129.eu-west-1.compute.internal pod pod-secrets-37553a8b-fcab-4997-bf95-1d67f80aa263 container secret-volume-test: <nil>
STEP: delete the pod
Oct 19 19:24:58.142: INFO: Waiting for pod pod-secrets-37553a8b-fcab-4997-bf95-1d67f80aa263 to disappear
Oct 19 19:24:58.249: INFO: Pod pod-secrets-37553a8b-fcab-4997-bf95-1d67f80aa263 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:7.834 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":28,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 19 19:24:58.169: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename topology
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to schedule a pod which has topologies that conflict with AllowedTopologies
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192
Oct 19 19:24:58.809: INFO: found topology map[topology.kubernetes.io/zone:eu-west-1a]
Oct 19 19:24:58.809: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
Oct 19 19:24:58.809: INFO: Not enough topologies in cluster -- skipping
STEP: Deleting pvc
STEP: Deleting sc
... skipping 7 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: aws]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Not enough topologies in cluster -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:199
------------------------------
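The skipped topology test above needs at least two zones, but this cluster only exposes topology.kubernetes.io/zone=eu-west-1a. A hypothetical StorageClass whose allowedTopologies deliberately conflict with the cluster's sole zone, the kind of setup the test would use to force a scheduling failure:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mode := storagev1.VolumeBindingWaitForFirstConsumer
	// Restricting provisioning to a zone the cluster does not have means a
	// pod consuming a PVC from this class can never be scheduled.
	sc := storagev1.StorageClass{
		ObjectMeta:        metav1.ObjectMeta{Name: "zonal-example"},
		Provisioner:       "kubernetes.io/aws-ebs",
		VolumeBindingMode: &mode,
		AllowedTopologies: []corev1.TopologySelectorTerm{{
			MatchLabelExpressions: []corev1.TopologySelectorLabelRequirement{{
				Key:    "topology.kubernetes.io/zone",
				Values: []string{"eu-west-1b"}, // deliberately conflicting zone
			}},
		}},
	}
	out, _ := json.MarshalIndent(sc, "", "  ")
	fmt.Println(string(out))
}
```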
... skipping 191 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":5,"skipped":21,"failed":0}

SSSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:24:59.418: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 100 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Oct 19 19:24:59.139: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4d25d366-4f73-44be-9856-8c196091326b" in namespace "downward-api-6700" to be "Succeeded or Failed"
Oct 19 19:24:59.245: INFO: Pod "downwardapi-volume-4d25d366-4f73-44be-9856-8c196091326b": Phase="Pending", Reason="", readiness=false. Elapsed: 106.076683ms
Oct 19 19:25:01.352: INFO: Pod "downwardapi-volume-4d25d366-4f73-44be-9856-8c196091326b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.212961958s
STEP: Saw pod success
Oct 19 19:25:01.352: INFO: Pod "downwardapi-volume-4d25d366-4f73-44be-9856-8c196091326b" satisfied condition "Succeeded or Failed"
Oct 19 19:25:01.458: INFO: Trying to get logs from node ip-172-20-55-71.eu-west-1.compute.internal pod downwardapi-volume-4d25d366-4f73-44be-9856-8c196091326b container client-container: <nil>
STEP: delete the pod
Oct 19 19:25:01.685: INFO: Waiting for pod downwardapi-volume-4d25d366-4f73-44be-9856-8c196091326b to disappear
Oct 19 19:25:01.791: INFO: Pod downwardapi-volume-4d25d366-4f73-44be-9856-8c196091326b no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 19 19:25:01.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6700" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":31,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
... skipping 80 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volumes should store data","total":-1,"completed":3,"skipped":14,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:25:05.187: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
... skipping 133 lines ...
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 19 19:24:53.993: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to exceed backoffLimit
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:349
STEP: Creating a job
STEP: Ensuring job exceed backofflimit
STEP: Checking that 2 pods were created and have status Failed
[AfterEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 19 19:25:06.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-5431" for this suite.


• [SLOW TEST:13.079 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should fail to exceed backoffLimit
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:349
------------------------------
{"msg":"PASSED [sig-apps] Job should fail to exceed backoffLimit","total":-1,"completed":9,"skipped":34,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 19 19:25:07.088: INFO: >>> kubeConfig: /root/.kube/config
... skipping 101 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 19 19:25:10.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-9449" for this suite.

•
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":-1,"completed":10,"skipped":42,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:25:10.573: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
... skipping 36 lines ...
• [SLOW TEST:9.375 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should support cascading deletion of custom resources
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/garbage_collector.go:920
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should support cascading deletion of custom resources","total":-1,"completed":4,"skipped":35,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:25:11.467: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
... skipping 84 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:208
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents","total":-1,"completed":2,"skipped":19,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:25:17.869: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 180 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run with an explicit non-root user ID [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:129
Oct 19 19:25:12.156: INFO: Waiting up to 5m0s for pod "explicit-nonroot-uid" in namespace "security-context-test-989" to be "Succeeded or Failed"
Oct 19 19:25:12.262: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 106.099553ms
Oct 19 19:25:14.373: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.216887159s
Oct 19 19:25:16.485: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 4.328514354s
Oct 19 19:25:18.590: INFO: Pod "explicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.434427763s
Oct 19 19:25:18.591: INFO: Pod "explicit-nonroot-uid" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 19 19:25:18.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-989" for this suite.


... skipping 2 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a container with runAsNonRoot
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:104
    should run with an explicit non-root user ID [LinuxOnly]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:129
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should run with an explicit non-root user ID [LinuxOnly]","total":-1,"completed":5,"skipped":41,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 11 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 19 19:25:19.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9615" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":-1,"completed":6,"skipped":44,"failed":0}

S
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 21 lines ...
Oct 19 19:23:15.328: INFO: stdout: "externalname-service-qgf9b"
Oct 19 19:23:15.328: INFO: Running '/tmp/kubectl1810586729/kubectl --server=https://api.e2e-e05d2a908c-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4121 exec execpodmhjcj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.35.5 31234'
Oct 19 19:23:16.497: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.20.35.5 31234\nConnection to 172.20.35.5 31234 port [tcp/*] succeeded!\n"
Oct 19 19:23:16.498: INFO: stdout: "externalname-service-qgf9b"
Oct 19 19:23:16.498: INFO: Running '/tmp/kubectl1810586729/kubectl --server=https://api.e2e-e05d2a908c-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4121 exec execpodmhjcj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.43.129 31234'
Oct 19 19:23:19.614: INFO: rc: 1
Oct 19 19:23:19.615: INFO: Service reachability failing with error: error running /tmp/kubectl1810586729/kubectl --server=https://api.e2e-e05d2a908c-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4121 exec execpodmhjcj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.43.129 31234:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 172.20.43.129 31234
nc: connect to 172.20.43.129 port 31234 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
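Each retry above shells out to nc inside the exec pod. A rough Go equivalent of that probe (address and port taken from the log, 2s timeout matching nc's -w 2), shown only to make the failure mode concrete:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Stand-in for `nc -v -t -w 2 172.20.43.129 31234`: attempt a TCP
	// connection to the node's NodePort with a 2-second timeout.
	addr := net.JoinHostPort("172.20.43.129", "31234")
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		fmt.Println("unreachable:", err) // the test logs this and retries
		return
	}
	defer conn.Close()
	fmt.Println("connected to", addr)
}
```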
Oct 19 19:23:20.615: INFO: Running '/tmp/kubectl1810586729/kubectl --server=https://api.e2e-e05d2a908c-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4121 exec execpodmhjcj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.43.129 31234'
Oct 19 19:23:23.748: INFO: rc: 1
Oct 19 19:23:23.748: INFO: Service reachability failing with error: error running /tmp/kubectl1810586729/kubectl --server=https://api.e2e-e05d2a908c-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4121 exec execpodmhjcj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.43.129 31234:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 172.20.43.129 31234
nc: connect to 172.20.43.129 port 31234 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
... skipping 280 lines ...
Oct 19 19:24:44.615: INFO: Running '/tmp/kubectl1810586729/kubectl --server=https://api.e2e-e05d2a908c-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4121 exec execpodmhjcj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.43.129 31234'
Oct 19 19:24:47.944: INFO: rc: 1
Oct 19 19:24:47.944: INFO: Service reachability failing with error: error running /tmp/kubectl1810586729/kubectl --server=https://api.e2e-e05d2a908c-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4121 exec execpodmhjcj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.43.129 31234:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 172.20.43.129 31234
nc: connect to 172.20.43.129 port 31234 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 19 19:24:48.616: INFO: Running '/tmp/kubectl1810586729/kubectl --server=https://api.e2e-e05d2a908c-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4121 exec execpodmhjcj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.43.129 31234'
Oct 19 19:24:51.767: INFO: rc: 1
Oct 19 19:24:51.767: INFO: Service reachability failing with error: error running /tmp/kubectl1810586729/kubectl --server=https://api.e2e-e05d2a908c-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4121 exec execpodmhjcj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.43.129 31234:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 172.20.43.129 31234
nc: connect to 172.20.43.129 port 31234 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 19 19:24:52.616: INFO: Running '/tmp/kubectl1810586729/kubectl --server=https://api.e2e-e05d2a908c-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4121 exec execpodmhjcj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.43.129 31234'
Oct 19 19:24:55.747: INFO: rc: 1
Oct 19 19:24:55.747: INFO: Service reachability failing with error: error running /tmp/kubectl1810586729/kubectl --server=https://api.e2e-e05d2a908c-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4121 exec execpodmhjcj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.43.129 31234:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 172.20.43.129 31234
nc: connect to 172.20.43.129 port 31234 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 19 19:24:56.615: INFO: Running '/tmp/kubectl1810586729/kubectl --server=https://api.e2e-e05d2a908c-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4121 exec execpodmhjcj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.43.129 31234'
Oct 19 19:24:59.823: INFO: rc: 1
Oct 19 19:24:59.823: INFO: Service reachability failing with error: error running /tmp/kubectl1810586729/kubectl --server=https://api.e2e-e05d2a908c-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4121 exec execpodmhjcj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.43.129 31234:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 172.20.43.129 31234
nc: connect to 172.20.43.129 port 31234 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 19 19:25:00.615: INFO: Running '/tmp/kubectl1810586729/kubectl --server=https://api.e2e-e05d2a908c-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4121 exec execpodmhjcj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.43.129 31234'
Oct 19 19:25:03.793: INFO: rc: 1
Oct 19 19:25:03.793: INFO: Service reachability failing with error: error running /tmp/kubectl1810586729/kubectl --server=https://api.e2e-e05d2a908c-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4121 exec execpodmhjcj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.43.129 31234:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 172.20.43.129 31234
nc: connect to 172.20.43.129 port 31234 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 19 19:25:04.616: INFO: Running '/tmp/kubectl1810586729/kubectl --server=https://api.e2e-e05d2a908c-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4121 exec execpodmhjcj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.43.129 31234'
Oct 19 19:25:07.786: INFO: rc: 1
Oct 19 19:25:07.786: INFO: Service reachability failing with error: error running /tmp/kubectl1810586729/kubectl --server=https://api.e2e-e05d2a908c-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4121 exec execpodmhjcj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.43.129 31234:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 172.20.43.129 31234
nc: connect to 172.20.43.129 port 31234 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 19 19:25:08.616: INFO: Running '/tmp/kubectl1810586729/kubectl --server=https://api.e2e-e05d2a908c-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4121 exec execpodmhjcj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.43.129 31234'
Oct 19 19:25:11.758: INFO: rc: 1
Oct 19 19:25:11.758: INFO: Service reachability failing with error: error running /tmp/kubectl1810586729/kubectl --server=https://api.e2e-e05d2a908c-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4121 exec execpodmhjcj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.43.129 31234:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 172.20.43.129 31234
nc: connect to 172.20.43.129 port 31234 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 19 19:25:12.615: INFO: Running '/tmp/kubectl1810586729/kubectl --server=https://api.e2e-e05d2a908c-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4121 exec execpodmhjcj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.43.129 31234'
Oct 19 19:25:15.761: INFO: rc: 1
Oct 19 19:25:15.761: INFO: Service reachability failing with error: error running /tmp/kubectl1810586729/kubectl --server=https://api.e2e-e05d2a908c-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4121 exec execpodmhjcj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.43.129 31234:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 172.20.43.129 31234
nc: connect to 172.20.43.129 port 31234 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 19 19:25:16.615: INFO: Running '/tmp/kubectl1810586729/kubectl --server=https://api.e2e-e05d2a908c-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4121 exec execpodmhjcj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.43.129 31234'
Oct 19 19:25:19.767: INFO: rc: 1
Oct 19 19:25:19.767: INFO: Service reachability failing with error: error running /tmp/kubectl1810586729/kubectl --server=https://api.e2e-e05d2a908c-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4121 exec execpodmhjcj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.43.129 31234:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 172.20.43.129 31234
nc: connect to 172.20.43.129 port 31234 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 19 19:25:19.767: INFO: Running '/tmp/kubectl1810586729/kubectl --server=https://api.e2e-e05d2a908c-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4121 exec execpodmhjcj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.43.129 31234'
Oct 19 19:25:22.901: INFO: rc: 1
Oct 19 19:25:22.901: INFO: Service reachability failing with error: error running /tmp/kubectl1810586729/kubectl --server=https://api.e2e-e05d2a908c-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4121 exec execpodmhjcj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.43.129 31234:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 172.20.43.129 31234
nc: connect to 172.20.43.129 port 31234 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 19 19:25:22.902: FAIL: Unexpected error:
    <*errors.errorString | 0xc002f3a110>: {
        s: "service is not reachable within 2m0s timeout on endpoint 172.20.43.129:31234 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint 172.20.43.129:31234 over TCP protocol
occurred

... skipping 288 lines ...
• Failure [146.773 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to change the type from ExternalName to NodePort [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Oct 19 19:25:22.902: Unexpected error:
      <*errors.errorString | 0xc002f3a110>: {
          s: "service is not reachable within 2m0s timeout on endpoint 172.20.43.129:31234 over TCP protocol",
      }
      service is not reachable within 2m0s timeout on endpoint 172.20.43.129:31234 over TCP protocol
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1351
------------------------------
{"msg":"FAILED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":-1,"completed":0,"skipped":0,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:25:28.695: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
... skipping 100 lines ...
Oct 19 19:25:16.114: INFO: PersistentVolumeClaim pvc-kz6gl found but phase is Pending instead of Bound.
Oct 19 19:25:18.219: INFO: PersistentVolumeClaim pvc-kz6gl found and phase=Bound (14.877783063s)
Oct 19 19:25:18.219: INFO: Waiting up to 3m0s for PersistentVolume local-xx9sl to have phase Bound
Oct 19 19:25:18.325: INFO: PersistentVolume local-xx9sl found and phase=Bound (105.395924ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-7hx9
STEP: Creating a pod to test subpath
Oct 19 19:25:18.642: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-7hx9" in namespace "provisioning-8937" to be "Succeeded or Failed"
Oct 19 19:25:18.747: INFO: Pod "pod-subpath-test-preprovisionedpv-7hx9": Phase="Pending", Reason="", readiness=false. Elapsed: 105.807263ms
Oct 19 19:25:20.854: INFO: Pod "pod-subpath-test-preprovisionedpv-7hx9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.212380701s
Oct 19 19:25:22.964: INFO: Pod "pod-subpath-test-preprovisionedpv-7hx9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.322493888s
Oct 19 19:25:25.071: INFO: Pod "pod-subpath-test-preprovisionedpv-7hx9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.429071316s
Oct 19 19:25:27.177: INFO: Pod "pod-subpath-test-preprovisionedpv-7hx9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.535679924s
Oct 19 19:25:29.284: INFO: Pod "pod-subpath-test-preprovisionedpv-7hx9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.642187633s
STEP: Saw pod success
Oct 19 19:25:29.284: INFO: Pod "pod-subpath-test-preprovisionedpv-7hx9" satisfied condition "Succeeded or Failed"
Oct 19 19:25:29.391: INFO: Trying to get logs from node ip-172-20-55-71.eu-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-7hx9 container test-container-subpath-preprovisionedpv-7hx9: <nil>
STEP: delete the pod
Oct 19 19:25:29.607: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-7hx9 to disappear
Oct 19 19:25:29.712: INFO: Pod pod-subpath-test-preprovisionedpv-7hx9 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-7hx9
Oct 19 19:25:29.712: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-7hx9" in namespace "provisioning-8937"
... skipping 38 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 19 19:25:32.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-3118" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController evictions: too few pods, absolute =\u003e should not allow an eviction","total":-1,"completed":1,"skipped":15,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:25:33.128: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
... skipping 43 lines ...
• [SLOW TEST:151.920 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should not emit unexpected warnings
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:221
------------------------------
{"msg":"PASSED [sig-apps] CronJob should not emit unexpected warnings","total":-1,"completed":1,"skipped":0,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:25:33.731: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 152 lines ...
Oct 19 19:25:15.905: INFO: PersistentVolumeClaim pvc-z8rcx found but phase is Pending instead of Bound.
Oct 19 19:25:18.016: INFO: PersistentVolumeClaim pvc-z8rcx found and phase=Bound (14.859569624s)
Oct 19 19:25:18.016: INFO: Waiting up to 3m0s for PersistentVolume local-cvxqz to have phase Bound
Oct 19 19:25:18.124: INFO: PersistentVolume local-cvxqz found and phase=Bound (108.418963ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-tb9w
STEP: Creating a pod to test subpath
Oct 19 19:25:18.445: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-tb9w" in namespace "provisioning-3974" to be "Succeeded or Failed"
Oct 19 19:25:18.551: INFO: Pod "pod-subpath-test-preprovisionedpv-tb9w": Phase="Pending", Reason="", readiness=false. Elapsed: 106.211843ms
Oct 19 19:25:20.658: INFO: Pod "pod-subpath-test-preprovisionedpv-tb9w": Phase="Pending", Reason="", readiness=false. Elapsed: 2.212616981s
Oct 19 19:25:22.770: INFO: Pod "pod-subpath-test-preprovisionedpv-tb9w": Phase="Pending", Reason="", readiness=false. Elapsed: 4.324672799s
Oct 19 19:25:24.880: INFO: Pod "pod-subpath-test-preprovisionedpv-tb9w": Phase="Pending", Reason="", readiness=false. Elapsed: 6.435222506s
Oct 19 19:25:26.993: INFO: Pod "pod-subpath-test-preprovisionedpv-tb9w": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.548271563s
STEP: Saw pod success
Oct 19 19:25:26.993: INFO: Pod "pod-subpath-test-preprovisionedpv-tb9w" satisfied condition "Succeeded or Failed"
Oct 19 19:25:27.099: INFO: Trying to get logs from node ip-172-20-55-71.eu-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-tb9w container test-container-subpath-preprovisionedpv-tb9w: <nil>
STEP: delete the pod
Oct 19 19:25:27.319: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-tb9w to disappear
Oct 19 19:25:27.424: INFO: Pod pod-subpath-test-preprovisionedpv-tb9w no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-tb9w
Oct 19 19:25:27.425: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-tb9w" in namespace "provisioning-3974"
STEP: Creating pod pod-subpath-test-preprovisionedpv-tb9w
STEP: Creating a pod to test subpath
Oct 19 19:25:27.640: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-tb9w" in namespace "provisioning-3974" to be "Succeeded or Failed"
Oct 19 19:25:27.746: INFO: Pod "pod-subpath-test-preprovisionedpv-tb9w": Phase="Pending", Reason="", readiness=false. Elapsed: 105.767323ms
Oct 19 19:25:29.853: INFO: Pod "pod-subpath-test-preprovisionedpv-tb9w": Phase="Pending", Reason="", readiness=false. Elapsed: 2.212777862s
Oct 19 19:25:31.964: INFO: Pod "pod-subpath-test-preprovisionedpv-tb9w": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.323672801s
STEP: Saw pod success
Oct 19 19:25:31.964: INFO: Pod "pod-subpath-test-preprovisionedpv-tb9w" satisfied condition "Succeeded or Failed"
Oct 19 19:25:32.070: INFO: Trying to get logs from node ip-172-20-55-71.eu-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-tb9w container test-container-subpath-preprovisionedpv-tb9w: <nil>
STEP: delete the pod
Oct 19 19:25:32.290: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-tb9w to disappear
Oct 19 19:25:32.396: INFO: Pod pod-subpath-test-preprovisionedpv-tb9w no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-tb9w
Oct 19 19:25:32.396: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-tb9w" in namespace "provisioning-3974"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:399
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":3,"skipped":26,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Oct 19 19:25:34.787: INFO: Waiting up to 5m0s for pod "downwardapi-volume-581be2b2-7230-4218-9576-8fd46c7e75aa" in namespace "projected-8125" to be "Succeeded or Failed"
Oct 19 19:25:34.894: INFO: Pod "downwardapi-volume-581be2b2-7230-4218-9576-8fd46c7e75aa": Phase="Pending", Reason="", readiness=false. Elapsed: 106.172203ms
Oct 19 19:25:37.001: INFO: Pod "downwardapi-volume-581be2b2-7230-4218-9576-8fd46c7e75aa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.213472563s
Oct 19 19:25:39.127: INFO: Pod "downwardapi-volume-581be2b2-7230-4218-9576-8fd46c7e75aa": Phase="Pending", Reason="", readiness=false. Elapsed: 4.339460432s
Oct 19 19:25:41.237: INFO: Pod "downwardapi-volume-581be2b2-7230-4218-9576-8fd46c7e75aa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.449500174s
STEP: Saw pod success
Oct 19 19:25:41.237: INFO: Pod "downwardapi-volume-581be2b2-7230-4218-9576-8fd46c7e75aa" satisfied condition "Succeeded or Failed"
Oct 19 19:25:41.343: INFO: Trying to get logs from node ip-172-20-52-34.eu-west-1.compute.internal pod downwardapi-volume-581be2b2-7230-4218-9576-8fd46c7e75aa container client-container: <nil>
STEP: delete the pod
Oct 19 19:25:41.566: INFO: Waiting for pod downwardapi-volume-581be2b2-7230-4218-9576-8fd46c7e75aa to disappear
Oct 19 19:25:41.672: INFO: Pod downwardapi-volume-581be2b2-7230-4218-9576-8fd46c7e75aa no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:7.739 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":32,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
... skipping 60 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":11,"skipped":44,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:25:46.366: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 21 lines ...
Oct 19 19:25:41.910: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir volume type on tmpfs
Oct 19 19:25:42.548: INFO: Waiting up to 5m0s for pod "pod-1285c37d-faa5-4817-be17-1d96601d0864" in namespace "emptydir-9373" to be "Succeeded or Failed"
Oct 19 19:25:42.655: INFO: Pod "pod-1285c37d-faa5-4817-be17-1d96601d0864": Phase="Pending", Reason="", readiness=false. Elapsed: 106.235403ms
Oct 19 19:25:44.761: INFO: Pod "pod-1285c37d-faa5-4817-be17-1d96601d0864": Phase="Pending", Reason="", readiness=false. Elapsed: 2.212445314s
Oct 19 19:25:46.868: INFO: Pod "pod-1285c37d-faa5-4817-be17-1d96601d0864": Phase="Pending", Reason="", readiness=false. Elapsed: 4.319743335s
Oct 19 19:25:48.975: INFO: Pod "pod-1285c37d-faa5-4817-be17-1d96601d0864": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.42666176s
STEP: Saw pod success
Oct 19 19:25:48.975: INFO: Pod "pod-1285c37d-faa5-4817-be17-1d96601d0864" satisfied condition "Succeeded or Failed"
Oct 19 19:25:49.081: INFO: Trying to get logs from node ip-172-20-43-129.eu-west-1.compute.internal pod pod-1285c37d-faa5-4817-be17-1d96601d0864 container test-container: <nil>
STEP: delete the pod
Oct 19 19:25:49.298: INFO: Waiting for pod pod-1285c37d-faa5-4817-be17-1d96601d0864 to disappear
Oct 19 19:25:49.404: INFO: Pod pod-1285c37d-faa5-4817-be17-1d96601d0864 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:7.708 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":33,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:25:49.642: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 43 lines ...
STEP: Registering the mutating configmap webhook via the AdmissionRegistration API
Oct 19 19:24:59.510: INFO: Waiting for webhook configuration to be ready...
Oct 19 19:25:09.834: INFO: Waiting for webhook configuration to be ready...
Oct 19 19:25:20.126: INFO: Waiting for webhook configuration to be ready...
Oct 19 19:25:30.427: INFO: Waiting for webhook configuration to be ready...
Oct 19 19:25:40.642: INFO: Waiting for webhook configuration to be ready...
Oct 19 19:25:40.642: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.errorString | 0xc000244250>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 664 lines ...
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate configmap [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Oct 19 19:25:40.642: waiting for webhook configuration to be ready
  Unexpected error:
      <*errors.errorString | 0xc000244250>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:988
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":-1,"completed":1,"skipped":9,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]"]}

S
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 14 lines ...
• [SLOW TEST:6.620 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":-1,"completed":12,"skipped":46,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 56 lines ...
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link-bindmounted] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":2,"skipped":19,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]}

SSSSSSSSS
------------------------------
[BeforeEach] [sig-node] PreStop
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 19 lines ...
• [SLOW TEST:31.511 seconds]
[sig-node] PreStop
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  graceful pod terminated should wait until preStop hook completes the process
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:170
------------------------------
{"msg":"PASSED [sig-node] PreStop graceful pod terminated should wait until preStop hook completes the process","total":-1,"completed":2,"skipped":12,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:26:05.385: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 38 lines ...
• [SLOW TEST:7.492 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":20,"failed":0}

SSSSSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":6,"skipped":37,"failed":0}
[BeforeEach] [sig-node] kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 19 19:25:31.228: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 114 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  Clean up pods on node
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:279
    kubelet should be able to delete 10 pods per node in 1m0s.
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:341
------------------------------
{"msg":"PASSED [sig-node] kubelet Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s.","total":-1,"completed":7,"skipped":37,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:91
STEP: Creating a pod to test downward API volume plugin
Oct 19 19:26:13.637: INFO: Waiting up to 5m0s for pod "metadata-volume-5b535ef6-0f95-46ef-bdfa-a9fdf935ce1b" in namespace "downward-api-7885" to be "Succeeded or Failed"
Oct 19 19:26:13.753: INFO: Pod "metadata-volume-5b535ef6-0f95-46ef-bdfa-a9fdf935ce1b": Phase="Pending", Reason="", readiness=false. Elapsed: 115.940823ms
Oct 19 19:26:15.861: INFO: Pod "metadata-volume-5b535ef6-0f95-46ef-bdfa-a9fdf935ce1b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.22323681s
STEP: Saw pod success
Oct 19 19:26:15.861: INFO: Pod "metadata-volume-5b535ef6-0f95-46ef-bdfa-a9fdf935ce1b" satisfied condition "Succeeded or Failed"
Oct 19 19:26:15.966: INFO: Trying to get logs from node ip-172-20-43-129.eu-west-1.compute.internal pod metadata-volume-5b535ef6-0f95-46ef-bdfa-a9fdf935ce1b container client-container: <nil>
STEP: delete the pod
Oct 19 19:26:16.181: INFO: Waiting for pod metadata-volume-5b535ef6-0f95-46ef-bdfa-a9fdf935ce1b to disappear
Oct 19 19:26:16.286: INFO: Pod metadata-volume-5b535ef6-0f95-46ef-bdfa-a9fdf935ce1b no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 19 19:26:16.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7885" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":8,"skipped":39,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 57 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:75
STEP: Creating configMap with name projected-configmap-test-volume-77212399-9a69-4d80-b19b-40000b9b0c15
STEP: Creating a pod to test consume configMaps
Oct 19 19:26:13.717: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-398e9d56-3e8a-4906-ac7e-809f33ccc73a" in namespace "projected-8119" to be "Succeeded or Failed"
Oct 19 19:26:13.823: INFO: Pod "pod-projected-configmaps-398e9d56-3e8a-4906-ac7e-809f33ccc73a": Phase="Pending", Reason="", readiness=false. Elapsed: 105.911494ms
Oct 19 19:26:15.929: INFO: Pod "pod-projected-configmaps-398e9d56-3e8a-4906-ac7e-809f33ccc73a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.212356461s
Oct 19 19:26:18.038: INFO: Pod "pod-projected-configmaps-398e9d56-3e8a-4906-ac7e-809f33ccc73a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.321444458s
STEP: Saw pod success
Oct 19 19:26:18.038: INFO: Pod "pod-projected-configmaps-398e9d56-3e8a-4906-ac7e-809f33ccc73a" satisfied condition "Succeeded or Failed"
Oct 19 19:26:18.143: INFO: Trying to get logs from node ip-172-20-43-129.eu-west-1.compute.internal pod pod-projected-configmaps-398e9d56-3e8a-4906-ac7e-809f33ccc73a container agnhost-container: <nil>
STEP: delete the pod
Oct 19 19:26:18.374: INFO: Waiting for pod pod-projected-configmaps-398e9d56-3e8a-4906-ac7e-809f33ccc73a to disappear
Oct 19 19:26:18.479: INFO: Pod pod-projected-configmaps-398e9d56-3e8a-4906-ac7e-809f33ccc73a no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.737 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:75
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":4,"skipped":26,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:26:18.718: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
... skipping 35 lines ...
      Only supported for node OS distro [gci ubuntu custom] (not debian)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:263
------------------------------
SSSSS
------------------------------
{"msg":"FAILED [sig-network] Conntrack should drop INVALID conntrack entries","total":-1,"completed":1,"skipped":26,"failed":1,"failures":["[sig-network] Conntrack should drop INVALID conntrack entries"]}
[BeforeEach] [sig-api-machinery] Aggregator
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 19 19:24:55.851: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 10 lines ...
Oct 19 19:25:06.308: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770268297, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770268297, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770268297, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770268297, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 19 19:25:08.308: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770268297, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770268297, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770268297, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770268297, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 19 19:25:10.308: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770268297, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770268297, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770268297, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770268297, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 19 19:26:12.745: INFO: Waited 1m0.331819473s for the sample-apiserver to be ready to handle requests.
Oct 19 19:26:12.745: INFO: current APIService: {"metadata":{"name":"v1alpha1.wardle.example.com","uid":"8c737f6a-9972-4b5b-8a0d-76c1cdafe334","resourceVersion":"8181","creationTimestamp":"2021-10-19T19:25:12Z","managedFields":[{"manager":"e2e.test","operation":"Update","apiVersion":"apiregistration.k8s.io/v1","time":"2021-10-19T19:25:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:spec":{"f:caBundle":{},"f:group":{},"f:groupPriorityMinimum":{},"f:service":{".":{},"f:name":{},"f:namespace":{},"f:port":{}},"f:version":{},"f:versionPriority":{}}}},{"manager":"kube-apiserver","operation":"Update","apiVersion":"apiregistration.k8s.io/v1","time":"2021-10-19T19:25:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}}]},"spec":{"service":{"namespace":"aggregator-7665","name":"sample-api","port":7443},"group":"wardle.example.com","version":"v1alpha1","caBundle":"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM5ekNDQWQrZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFkTVJzd0dRWURWUVFERXhKbE1tVXQKYzJWeWRtVnlMV05sY25RdFkyRXdIaGNOTWpFeE1ERTVNVGt5TkRVMldoY05NekV4TURFM01Ua3lORFUyV2pBZApNUnN3R1FZRFZRUURFeEpsTW1VdGMyVnlkbVZ5TFdObGNuUXRZMkV3Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBCkE0SUJEd0F3Z2dFS0FvSUJBUUNsemI2WVBFWGVjd3kyNStDVzBuQUduMnZzU3BHenlubzl5SGF4V0VFc3hWSHIKVEJLZ3lFNldyS0g5ZDFnT0tvOUZTQVQvQmFzK3JWR3ErLy93ZndBM3NZclo5TmNtVWhMdkV2TXVLYWwvZTVwMwo3VTd5d0RyUjk0azdiM0Q4bnR1WG9kSkJGOXcxUi90cjRxVTdBbmY4WXpwOHJvdmc1VWlDRW5nZlhCeS9kV29hCkovamVCWjVaRWRvVGRFTDZLU3RObDlJOEJHbjFjMUNmMXk2ZmhRcGdWL29VTnk2cTRmdlhtNWNQN1phZTFISFgKUUJsc1lmT25vK05zQmFvODc5S0xMWlF2NUYwVEh0RXpaMTFyclZqTkF1RnBUN0NGRHI0Wm5XNTlGb0ZzM2VlVQpPQ0hjSXVIdHp5aS9yZGNOYTErb3JoZU5RV1hwajhKcWtQVnM0OUVmQWdNQkFBR2pRakJBTUE0R0ExVWREd0VCCi93UUVBd0lDcERBUEJnTlZIUk1CQWY4RUJUQURBUUgvTUIwR0ExVWREZ1FXQkJUc2x1MHRjMzFKaWJoM1lPaW0KSVJSaWpkWkNQVEFOQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBcFpLb08xaHpnVWFNVnNwb3RVSmhQeVFhS2toOQpkdVNJOHNlY3NXKzBUS081VmtUSjR1ZlVMbEV5ZFBvMHVtbktGdTk2SW5xNmVuM0ZCeDZBOXcrK1dpMEx6emE3Cm03dHJFZjVhK2xaeWhrZTczV0lucTFKZWhPOGFsQnJVNHM4YTVJRWorNXljWW9rUFdIT3B3dXFtUklKYmdSMXEKcGlaU0ExTHRnRUpWUVpLcnpCblJORmVXOUtmV3FGZElyamgwM01NNDkxanhrcVczcDUxZkE0YVVhUm03N2NtdQowZGlKMVBEdVZjekhDdlBucWdtRlFDWE1FYTJEeHRRT1hWcE5pWjcwMHlnUGNOTyt6R3AyaDd6VlVnSC9pQU84ClIxUng5dzRYY1k1cCtucGlVUk13TTAyckpFaDBCOEFCSFFtd3pOai9tSWlqM2pacWpHYlFYSmU5eEE9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==","groupPriorityMinimum":2000,"versionPriority":200},"status":{"conditions":[{"type":"Available","status":"False","lastTransitionTime":"2021-10-19T19:25:12Z","reason":"FailedDiscoveryCheck","message":"failing or missing response from https://100.70.222.97:7443/apis/wardle.example.com/v1alpha1: Get \"https://100.70.222.97:7443/apis/wardle.example.com/v1alpha1\": dial tcp 100.70.222.97:7443: i/o timeout"}]}}
Oct 19 19:26:12.747: INFO: current pods: {"metadata":{"resourceVersion":"8327"},"items":[{"metadata":{"name":"sample-apiserver-deployment-64f6b9dc99-wd9m8","generateName":"sample-apiserver-deployment-64f6b9dc99-","namespace":"aggregator-7665","uid":"df006531-e8fd-49d9-a002-ae218f592551","resourceVersion":"6769","creationTimestamp":"2021-10-19T19:24:57Z","labels":{"apiserver":"true","app":"sample-apiserver","pod-template-hash":"64f6b9dc99"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"sample-apiserver-deployment-64f6b9dc99","uid":"98bb23d3-b8cd-44ed-94e0-48d7b5fb867a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-10-19T19:24:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:apiserver":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"98bb23d3-b8cd-44ed-94e0-48d7b5fb867a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"etcd\"}":{".":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}},"k:{\"name\":\"sample-apiserver\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/apiserver.local.config/certificates\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:volumes":{".":{},"k:{\"name\":\"apiserver-certs\"}":{".":{},"f:name":{},"f:secret":{".":{},"f:defaultMode":{},"f:secretName":{}}}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-10-19T19:25:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.3.59\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},"spec":{"volumes":[{"name":"apiserver-certs","secret":{"secretName":"sample-apiserver-secret","defaultMode":420}},{"name":"kube-api-access-96mw9","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"sample-apiserver","image":"k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4","args":["--etcd-servers=http://127.0.0.1:2379","--tls-cert-file=/apiserver.local.config/certificates/tls.crt","--tls-private-key-file=/apiserver.local.config/certificates/tls.key","--audit-log-path=-","--audit-log-maxage=0","--audit-log-maxbackup=0"],"resources":{},"volumeMounts":[{"name":"apiserver-certs","readOnly":true,"mountPath":"/apiserver.local.config/certificates"},{"name":"kube-api-access-96mw9","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"},{"name":"etcd","image":"k8s.gcr.io/etcd:3.4.13-0","command":["/usr/local/bin/etcd","--listen-client-urls","http://127.0.0.1:2379","--advertise-client-urls","http://127.0.0.1:2379"],"resources":{},"volumeMounts":[{"name":"kube-api-access-96mw9","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"}],"restartPolicy":"Always","terminationGracePeriodSeconds":0,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"ip-172-20-43-129.eu-west-1.compute.internal","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-10-19T19:24:57Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-10-19T19:25:09Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-10-19T19:25:09Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-10-19T19:24:57Z"}],"hostIP":"172.20.43.129","podIP":"100.96.3.59","podIPs":[{"ip":"100.96.3.59"}],"startTime":"2021-10-19T19:24:57Z","containerStatuses":[{"name":"etcd","state":{"running":{"startedAt":"2021-10-19T19:25:09Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"k8s.gcr.io/etcd:3.4.13-0","imageID":"k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2","containerID":"containerd://5b02b71678d70d83e43739b09bcfd221f7fd45a60ccf068a551d2a9ddeb0bb33","started":true},{"name":"sample-apiserver","state":{"running":{"startedAt":"2021-10-19T19:25:03Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4","imageID":"k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276","containerID":"containerd://76cf372383384e8fce22cefd2eda7f16070f1a6f69d3c1e6456f93587f82b8e1","started":true}],"qosClass":"BestEffort"}}]}
Oct 19 19:26:12.853: INFO: logs of sample-apiserver-deployment-64f6b9dc99-wd9m8/sample-apiserver (error: <nil>): W1019 19:25:03.611581       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found
W1019 19:25:03.611657       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found
I1019 19:25:03.624799       1 plugins.go:158] Loaded 3 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,MutatingAdmissionWebhook,BanFlunder.
I1019 19:25:03.624822       1 plugins.go:161] Loaded 1 validating admission controller(s) successfully in the following order: ValidatingAdmissionWebhook.
I1019 19:25:03.627664       1 client.go:361] parsed scheme: "endpoint"
I1019 19:25:03.627709       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
W1019 19:25:03.628108       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
I1019 19:25:04.118884       1 client.go:361] parsed scheme: "endpoint"
I1019 19:25:04.118979       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
W1019 19:25:04.119254       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1019 19:25:04.628570       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1019 19:25:05.124668       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1019 19:25:06.447797       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1019 19:25:06.454442       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1019 19:25:09.461616       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1019 19:25:09.476835       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
I1019 19:25:13.065466       1 client.go:361] parsed scheme: "endpoint"
I1019 19:25:13.065504       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1019 19:25:13.066788       1 client.go:361] parsed scheme: "endpoint"
I1019 19:25:13.066806       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1019 19:25:13.069665       1 client.go:361] parsed scheme: "endpoint"
I1019 19:25:13.069698       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
... skipping 4 lines ...
I1019 19:25:13.171406       1 dynamic_serving_content.go:129] Starting serving-cert::/apiserver.local.config/certificates/tls.crt::/apiserver.local.config/certificates/tls.key
I1019 19:25:13.172297       1 secure_serving.go:178] Serving securely on [::]:443
I1019 19:25:13.172379       1 tlsconfig.go:219] Starting DynamicServingCertificateController
I1019 19:25:13.270414       1 shared_informer.go:204] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 
I1019 19:25:13.271356       1 shared_informer.go:204] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 

Oct 19 19:26:12.960: INFO: logs of sample-apiserver-deployment-64f6b9dc99-wd9m8/etcd (error: <nil>): [WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2021-10-19 19:25:09.692508 I | etcdmain: etcd Version: 3.4.13
2021-10-19 19:25:09.692543 I | etcdmain: Git SHA: ae9734ed2
2021-10-19 19:25:09.692546 I | etcdmain: Go Version: go1.12.17
2021-10-19 19:25:09.692549 I | etcdmain: Go OS/Arch: linux/amd64
2021-10-19 19:25:09.692553 I | etcdmain: setting maximum number of CPUs to 2, total number of available CPUs is 2
2021-10-19 19:25:09.692559 W | etcdmain: no data-dir provided, using default data-dir ./default.etcd
... skipping 26 lines ...
2021-10-19 19:25:10.104194 I | etcdserver: setting up the initial cluster version to 3.4
2021-10-19 19:25:10.104269 I | embed: ready to serve client requests
2021-10-19 19:25:10.105326 N | embed: serving insecure client requests on 127.0.0.1:2379, this is strongly discouraged!
2021-10-19 19:25:10.105941 N | etcdserver/membership: set the initial cluster version to 3.4
2021-10-19 19:25:10.106022 I | etcdserver/api: enabled capabilities for version 3.4

Oct 19 19:26:12.960: FAIL: gave up waiting for apiservice wardle to come up successfully
Unexpected error:
    <*errors.errorString | 0xc0002b6250>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 273 lines ...
[sig-api-machinery] Aggregator
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Oct 19 19:26:12.960: gave up waiting for apiservice wardle to come up successfully
  Unexpected error:
      <*errors.errorString | 0xc0002b6250>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:406
------------------------------
{"msg":"FAILED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":-1,"completed":1,"skipped":26,"failed":2,"failures":["[sig-network] Conntrack should drop INVALID conntrack entries","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

SSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Generated clientset
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 108 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with same fsgroup skips ownership changes to the volume contents
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:208
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with same fsgroup skips ownership changes to the volume contents","total":-1,"completed":6,"skipped":33,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-api-machinery] API priority and fairness
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 72 lines ...
      Disabled temporarily, reopen after #73168 is fixed

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:287
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":3,"skipped":22,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 19 19:26:17.214: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 55 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: tmpfs] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":4,"skipped":22,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]}
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:26:29.847: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 197 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] provisioning
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should provision storage with pvc data source
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:238
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source","total":-1,"completed":4,"skipped":12,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:26:30.805: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 21 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run with an image specified user ID
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:151
Oct 19 19:26:28.613: INFO: Waiting up to 5m0s for pod "implicit-nonroot-uid" in namespace "security-context-test-3349" to be "Succeeded or Failed"
Oct 19 19:26:28.718: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 105.269074ms
Oct 19 19:26:30.825: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.211580673s
Oct 19 19:26:32.935: INFO: Pod "implicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.321530603s
Oct 19 19:26:32.935: INFO: Pod "implicit-nonroot-uid" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 19 19:26:33.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-3349" for this suite.


... skipping 2 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a container with runAsNonRoot
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:104
    should run with an image specified user ID
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:151
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should run with an image specified user ID","total":-1,"completed":7,"skipped":42,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:26:33.271: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 55 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:384

      Driver hostPathSymlink doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-api-machinery] Generated clientset should create v1beta1 cronJobs, delete cronJobs, watch cronJobs","total":-1,"completed":5,"skipped":34,"failed":0}
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 19 19:26:20.278: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 91 lines ...
• [SLOW TEST:13.390 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run the lifecycle of a Deployment [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","total":-1,"completed":6,"skipped":34,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:26:33.683: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 230 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      Verify if offline PVC expansion works
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:174
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works","total":-1,"completed":7,"skipped":45,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:26:37.465: INFO: Only supported for providers [openstack] (not aws)
... skipping 72 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:449
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":5,"skipped":15,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
... skipping 53 lines ...
Oct 19 19:24:46.248: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [csi-hostpathhtzvx] to have phase Bound
Oct 19 19:24:46.355: INFO: PersistentVolumeClaim csi-hostpathhtzvx found but phase is Pending instead of Bound.
Oct 19 19:24:48.464: INFO: PersistentVolumeClaim csi-hostpathhtzvx found but phase is Pending instead of Bound.
Oct 19 19:24:50.574: INFO: PersistentVolumeClaim csi-hostpathhtzvx found and phase=Bound (4.325905215s)
STEP: Creating pod pod-subpath-test-dynamicpv-nxtq
STEP: Creating a pod to test subpath
Oct 19 19:24:50.899: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-nxtq" in namespace "provisioning-1647" to be "Succeeded or Failed"
Oct 19 19:24:51.005: INFO: Pod "pod-subpath-test-dynamicpv-nxtq": Phase="Pending", Reason="", readiness=false. Elapsed: 106.209863ms
Oct 19 19:24:53.113: INFO: Pod "pod-subpath-test-dynamicpv-nxtq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.213886784s
Oct 19 19:24:55.220: INFO: Pod "pod-subpath-test-dynamicpv-nxtq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.321044567s
Oct 19 19:24:57.327: INFO: Pod "pod-subpath-test-dynamicpv-nxtq": Phase="Pending", Reason="", readiness=false. Elapsed: 6.4281485s
Oct 19 19:24:59.434: INFO: Pod "pod-subpath-test-dynamicpv-nxtq": Phase="Pending", Reason="", readiness=false. Elapsed: 8.535392854s
Oct 19 19:25:01.541: INFO: Pod "pod-subpath-test-dynamicpv-nxtq": Phase="Pending", Reason="", readiness=false. Elapsed: 10.642652368s
Oct 19 19:25:03.649: INFO: Pod "pod-subpath-test-dynamicpv-nxtq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.750221163s
STEP: Saw pod success
Oct 19 19:25:03.649: INFO: Pod "pod-subpath-test-dynamicpv-nxtq" satisfied condition "Succeeded or Failed"
Oct 19 19:25:03.764: INFO: Trying to get logs from node ip-172-20-52-34.eu-west-1.compute.internal pod pod-subpath-test-dynamicpv-nxtq container test-container-volume-dynamicpv-nxtq: <nil>
STEP: delete the pod
Oct 19 19:25:03.996: INFO: Waiting for pod pod-subpath-test-dynamicpv-nxtq to disappear
Oct 19 19:25:04.102: INFO: Pod pod-subpath-test-dynamicpv-nxtq no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-nxtq
Oct 19 19:25:04.102: INFO: Deleting pod "pod-subpath-test-dynamicpv-nxtq" in namespace "provisioning-1647"
... skipping 54 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory","total":-1,"completed":5,"skipped":45,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:26:45.013: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 107 lines ...
Oct 19 19:24:32.694: INFO: stdout: "nodeport-update-service-xd7cq"
Oct 19 19:24:32.694: INFO: Running '/tmp/kubectl1810586729/kubectl --server=https://api.e2e-e05d2a908c-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-176 exec execpodcgj8c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.68.123.96 80'
Oct 19 19:24:33.836: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 100.68.123.96 80\nConnection to 100.68.123.96 80 port [tcp/http] succeeded!\n"
Oct 19 19:24:33.836: INFO: stdout: "nodeport-update-service-xd7cq"
Oct 19 19:24:33.836: INFO: Running '/tmp/kubectl1810586729/kubectl --server=https://api.e2e-e05d2a908c-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-176 exec execpodcgj8c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.43.129 30044'
Oct 19 19:24:36.989: INFO: rc: 1
Oct 19 19:24:36.989: INFO: Service reachability failing with error: error running /tmp/kubectl1810586729/kubectl --server=https://api.e2e-e05d2a908c-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-176 exec execpodcgj8c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.43.129 30044:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 172.20.43.129 30044
nc: connect to 172.20.43.129 port 30044 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 19 19:24:37.989: INFO: Running '/tmp/kubectl1810586729/kubectl --server=https://api.e2e-e05d2a908c-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-176 exec execpodcgj8c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.43.129 30044'
Oct 19 19:24:41.126: INFO: rc: 1
Oct 19 19:24:41.127: INFO: Service reachability failing with error: error running /tmp/kubectl1810586729/kubectl --server=https://api.e2e-e05d2a908c-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-176 exec execpodcgj8c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.43.129 30044:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 172.20.43.129 30044
nc: connect to 172.20.43.129 port 30044 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
... skipping 392 lines ...
Oct 19 19:26:33.990: INFO: Running '/tmp/kubectl1810586729/kubectl --server=https://api.e2e-e05d2a908c-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-176 exec execpodcgj8c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.43.129 30044'
Oct 19 19:26:37.178: INFO: rc: 1
Oct 19 19:26:37.178: INFO: Service reachability failing with error: error running /tmp/kubectl1810586729/kubectl --server=https://api.e2e-e05d2a908c-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-176 exec execpodcgj8c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.43.129 30044:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 172.20.43.129 30044
nc: connect to 172.20.43.129 port 30044 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 19 19:26:37.178: INFO: Running '/tmp/kubectl1810586729/kubectl --server=https://api.e2e-e05d2a908c-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-176 exec execpodcgj8c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.43.129 30044'
Oct 19 19:26:40.352: INFO: rc: 1
Oct 19 19:26:40.353: INFO: Service reachability failing with error: error running /tmp/kubectl1810586729/kubectl --server=https://api.e2e-e05d2a908c-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-176 exec execpodcgj8c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.43.129 30044:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 172.20.43.129 30044
nc: connect to 172.20.43.129 port 30044 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 19 19:26:40.353: FAIL: Unexpected error:
    <*errors.errorString | 0xc0040b8030>: {
        s: "service is not reachable within 2m0s timeout on endpoint 172.20.43.129:30044 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint 172.20.43.129:30044 over TCP protocol
occurred

... skipping 276 lines ...
• Failure [153.747 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to update service type to NodePort listening on same port number but different protocols [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1211

  Oct 19 19:26:40.353: Unexpected error:
      <*errors.errorString | 0xc0040b8030>: {
          s: "service is not reachable within 2m0s timeout on endpoint 172.20.43.129:30044 over TCP protocol",
      }
      service is not reachable within 2m0s timeout on endpoint 172.20.43.129:30044 over TCP protocol
  occurred

... skipping 27 lines ...
• [SLOW TEST:7.610 seconds]
[sig-node] Events
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":-1,"completed":6,"skipped":18,"failed":0}

S
------------------------------
{"msg":"FAILED [sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols","total":-1,"completed":3,"skipped":19,"failed":1,"failures":["[sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols"]}
[BeforeEach] [sig-storage] Projected combined
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 19 19:26:45.742: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-projected-all-test-volume-8e7b7885-fa55-464a-ae54-ecb51beac142
STEP: Creating secret with name secret-projected-all-test-volume-acb6fd42-2049-4baa-8051-feaa2be64074
STEP: Creating a pod to test Check all projections for projected volume plugin
Oct 19 19:26:46.586: INFO: Waiting up to 5m0s for pod "projected-volume-f7155380-06ca-42cb-a26e-73a622f64320" in namespace "projected-4496" to be "Succeeded or Failed"
Oct 19 19:26:46.690: INFO: Pod "projected-volume-f7155380-06ca-42cb-a26e-73a622f64320": Phase="Pending", Reason="", readiness=false. Elapsed: 104.717793ms
Oct 19 19:26:48.796: INFO: Pod "projected-volume-f7155380-06ca-42cb-a26e-73a622f64320": Phase="Pending", Reason="", readiness=false. Elapsed: 2.210483236s
Oct 19 19:26:50.902: INFO: Pod "projected-volume-f7155380-06ca-42cb-a26e-73a622f64320": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.316752739s
STEP: Saw pod success
Oct 19 19:26:50.903: INFO: Pod "projected-volume-f7155380-06ca-42cb-a26e-73a622f64320" satisfied condition "Succeeded or Failed"
Oct 19 19:26:51.010: INFO: Trying to get logs from node ip-172-20-35-5.eu-west-1.compute.internal pod projected-volume-f7155380-06ca-42cb-a26e-73a622f64320 container projected-all-volume-test: <nil>
STEP: delete the pod
Oct 19 19:26:51.228: INFO: Waiting for pod projected-volume-f7155380-06ca-42cb-a26e-73a622f64320 to disappear
Oct 19 19:26:51.333: INFO: Pod projected-volume-f7155380-06ca-42cb-a26e-73a622f64320 no longer exists
[AfterEach] [sig-storage] Projected combined
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.802 seconds]
[sig-storage] Projected combined
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":19,"failed":1,"failures":["[sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:26:51.571: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 55 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 19 19:26:52.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8067" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":-1,"completed":5,"skipped":24,"failed":1,"failures":["[sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols"]}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:26:52.773: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 89 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50
[It] new files should be created with FSGroup ownership when container is root
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:55
STEP: Creating a pod to test emptydir 0644 on tmpfs
Oct 19 19:26:50.215: INFO: Waiting up to 5m0s for pod "pod-2e5d9d45-42cb-408a-94a3-6449eb655e97" in namespace "emptydir-5660" to be "Succeeded or Failed"
Oct 19 19:26:50.322: INFO: Pod "pod-2e5d9d45-42cb-408a-94a3-6449eb655e97": Phase="Pending", Reason="", readiness=false. Elapsed: 106.119064ms
Oct 19 19:26:52.431: INFO: Pod "pod-2e5d9d45-42cb-408a-94a3-6449eb655e97": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.215714186s
STEP: Saw pod success
Oct 19 19:26:52.431: INFO: Pod "pod-2e5d9d45-42cb-408a-94a3-6449eb655e97" satisfied condition "Succeeded or Failed"
Oct 19 19:26:52.537: INFO: Trying to get logs from node ip-172-20-52-34.eu-west-1.compute.internal pod pod-2e5d9d45-42cb-408a-94a3-6449eb655e97 container test-container: <nil>
STEP: delete the pod
Oct 19 19:26:52.757: INFO: Waiting for pod pod-2e5d9d45-42cb-408a-94a3-6449eb655e97 to disappear
Oct 19 19:26:52.863: INFO: Pod pod-2e5d9d45-42cb-408a-94a3-6449eb655e97 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 118 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (ext4)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data","total":-1,"completed":2,"skipped":10,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]"]}

SSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 101 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:376
    should return command exit codes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:496
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should return command exit codes","total":-1,"completed":3,"skipped":27,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:26:54.331: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 123 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSIStorageCapacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1134
    CSIStorageCapacity disabled
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1177
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity disabled","total":-1,"completed":3,"skipped":54,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:26:55.469: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 36 lines ...
STEP: Creating a mutating webhook configuration
Oct 19 19:26:05.058: INFO: Waiting for webhook configuration to be ready...
Oct 19 19:26:15.373: INFO: Waiting for webhook configuration to be ready...
Oct 19 19:26:25.675: INFO: Waiting for webhook configuration to be ready...
Oct 19 19:26:35.974: INFO: Waiting for webhook configuration to be ready...
Oct 19 19:26:46.188: INFO: Waiting for webhook configuration to be ready...
Oct 19 19:26:46.189: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.errorString | 0xc000242260>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 521 lines ...
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a mutating webhook should work [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Oct 19 19:26:46.189: waiting for webhook configuration to be ready
  Unexpected error:
      <*errors.errorString | 0xc000242260>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:527
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":-1,"completed":5,"skipped":37,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}

S
------------------------------
{"msg":"PASSED [sig-node] Pods Extended Pod Container Status should never report success for a pending container","total":-1,"completed":6,"skipped":15,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 19 19:26:37.151: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 63 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume one after the other
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithoutformat] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":7,"skipped":15,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:26:56.823: INFO: Only supported for providers [openstack] (not aws)
... skipping 39 lines ...
• [SLOW TEST:5.731 seconds]
[sig-apps] DisruptionController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  evictions: enough pods, absolute => should allow an eviction
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:267
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController evictions: enough pods, absolute =\u003e should allow an eviction","total":-1,"completed":4,"skipped":37,"failed":0}
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 19 19:27:00.124: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 15 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 19 19:27:02.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-426" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for cronjob","total":-1,"completed":5,"skipped":37,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:27:02.735: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 158 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-a46ab8e3-8570-41ac-9535-b2ccec2caa8c
STEP: Creating a pod to test consume configMaps
Oct 19 19:26:56.264: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c9d4b14a-b8c0-41f6-af19-6fedc2a13fbf" in namespace "projected-8947" to be "Succeeded or Failed"
Oct 19 19:26:56.369: INFO: Pod "pod-projected-configmaps-c9d4b14a-b8c0-41f6-af19-6fedc2a13fbf": Phase="Pending", Reason="", readiness=false. Elapsed: 104.986294ms
Oct 19 19:26:58.474: INFO: Pod "pod-projected-configmaps-c9d4b14a-b8c0-41f6-af19-6fedc2a13fbf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.210041908s
Oct 19 19:27:00.581: INFO: Pod "pod-projected-configmaps-c9d4b14a-b8c0-41f6-af19-6fedc2a13fbf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.316627742s
Oct 19 19:27:02.687: INFO: Pod "pod-projected-configmaps-c9d4b14a-b8c0-41f6-af19-6fedc2a13fbf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.422231167s
STEP: Saw pod success
Oct 19 19:27:02.687: INFO: Pod "pod-projected-configmaps-c9d4b14a-b8c0-41f6-af19-6fedc2a13fbf" satisfied condition "Succeeded or Failed"
Oct 19 19:27:02.792: INFO: Trying to get logs from node ip-172-20-35-5.eu-west-1.compute.internal pod pod-projected-configmaps-c9d4b14a-b8c0-41f6-af19-6fedc2a13fbf container agnhost-container: <nil>
STEP: delete the pod
Oct 19 19:27:03.008: INFO: Waiting for pod pod-projected-configmaps-c9d4b14a-b8c0-41f6-af19-6fedc2a13fbf to disappear
Oct 19 19:27:03.113: INFO: Pod pod-projected-configmaps-c9d4b14a-b8c0-41f6-af19-6fedc2a13fbf no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:7.811 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":62,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:27:03.369: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 14 lines ...
      Driver local doesn't support InlineVolume -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SSSS
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is root","total":-1,"completed":7,"skipped":19,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 19 19:26:53.088: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support readOnly directory specified in the volumeMount
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:369
Oct 19 19:26:53.625: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Oct 19 19:26:53.841: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-5642" in namespace "provisioning-5642" to be "Succeeded or Failed"
Oct 19 19:26:53.947: INFO: Pod "hostpath-symlink-prep-provisioning-5642": Phase="Pending", Reason="", readiness=false. Elapsed: 105.983234ms
Oct 19 19:26:56.054: INFO: Pod "hostpath-symlink-prep-provisioning-5642": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.212944068s
STEP: Saw pod success
Oct 19 19:26:56.054: INFO: Pod "hostpath-symlink-prep-provisioning-5642" satisfied condition "Succeeded or Failed"
Oct 19 19:26:56.054: INFO: Deleting pod "hostpath-symlink-prep-provisioning-5642" in namespace "provisioning-5642"
Oct 19 19:26:56.163: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-5642" to be fully deleted
Oct 19 19:26:56.271: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-fjvm
STEP: Creating a pod to test subpath
Oct 19 19:26:56.378: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-fjvm" in namespace "provisioning-5642" to be "Succeeded or Failed"
Oct 19 19:26:56.485: INFO: Pod "pod-subpath-test-inlinevolume-fjvm": Phase="Pending", Reason="", readiness=false. Elapsed: 107.160054ms
Oct 19 19:26:58.593: INFO: Pod "pod-subpath-test-inlinevolume-fjvm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.214529258s
Oct 19 19:27:00.699: INFO: Pod "pod-subpath-test-inlinevolume-fjvm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.321298332s
STEP: Saw pod success
Oct 19 19:27:00.700: INFO: Pod "pod-subpath-test-inlinevolume-fjvm" satisfied condition "Succeeded or Failed"
Oct 19 19:27:00.805: INFO: Trying to get logs from node ip-172-20-55-71.eu-west-1.compute.internal pod pod-subpath-test-inlinevolume-fjvm container test-container-subpath-inlinevolume-fjvm: <nil>
STEP: delete the pod
Oct 19 19:27:01.030: INFO: Waiting for pod pod-subpath-test-inlinevolume-fjvm to disappear
Oct 19 19:27:01.138: INFO: Pod pod-subpath-test-inlinevolume-fjvm no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-fjvm
Oct 19 19:27:01.138: INFO: Deleting pod "pod-subpath-test-inlinevolume-fjvm" in namespace "provisioning-5642"
STEP: Deleting pod
Oct 19 19:27:01.245: INFO: Deleting pod "pod-subpath-test-inlinevolume-fjvm" in namespace "provisioning-5642"
Oct 19 19:27:01.460: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-5642" in namespace "provisioning-5642" to be "Succeeded or Failed"
Oct 19 19:27:01.566: INFO: Pod "hostpath-symlink-prep-provisioning-5642": Phase="Pending", Reason="", readiness=false. Elapsed: 105.947054ms
Oct 19 19:27:03.673: INFO: Pod "hostpath-symlink-prep-provisioning-5642": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.212576338s
STEP: Saw pod success
Oct 19 19:27:03.673: INFO: Pod "hostpath-symlink-prep-provisioning-5642" satisfied condition "Succeeded or Failed"
Oct 19 19:27:03.673: INFO: Deleting pod "hostpath-symlink-prep-provisioning-5642" in namespace "provisioning-5642"
Oct 19 19:27:03.783: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-5642" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 19 19:27:03.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-5642" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:369
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":8,"skipped":19,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
... skipping 9 lines ...
Oct 19 19:26:34.272: INFO: Using claimSize:1Gi, test suite supported size:{ 1Gi}, driver(aws) supported size:{ 1Gi} 
STEP: creating a StorageClass volume-expand-2792f2bp7
STEP: creating a claim
Oct 19 19:26:34.378: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Expanding non-expandable pvc
Oct 19 19:26:34.589: INFO: currentPvcSize {{1073741824 0} {<nil>} 1Gi BinarySI}, newSize {{2147483648 0} {<nil>}  BinarySI}
Oct 19 19:26:34.802: INFO: Error updating pvc awsgj2p9: PersistentVolumeClaim "awsgj2p9" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-2792f2bp7",
  	... // 2 identical fields
  }

... skipping 16 further near-identical 'Error updating pvc awsgj2p9: PersistentVolumeClaim "awsgj2p9" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims' retries (Oct 19 19:26:37.016 through 19:27:05.230) ...
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not allow expansion of pvcs without AllowVolumeExpansion property
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:157
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property","total":-1,"completed":7,"skipped":42,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 102 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 19 19:27:10.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ingress-9491" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":-1,"completed":8,"skipped":56,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 19 19:27:05.805: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-6c605b0e-88cb-44b1-ba1d-4977eebe183e
STEP: Creating a pod to test consume configMaps
Oct 19 19:27:06.558: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4755d363-01f5-4cf6-8cbb-29f3d98df6e4" in namespace "projected-5562" to be "Succeeded or Failed"
Oct 19 19:27:06.664: INFO: Pod "pod-projected-configmaps-4755d363-01f5-4cf6-8cbb-29f3d98df6e4": Phase="Pending", Reason="", readiness=false. Elapsed: 105.365784ms
Oct 19 19:27:08.769: INFO: Pod "pod-projected-configmaps-4755d363-01f5-4cf6-8cbb-29f3d98df6e4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.21104348s
Oct 19 19:27:10.876: INFO: Pod "pod-projected-configmaps-4755d363-01f5-4cf6-8cbb-29f3d98df6e4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.317681405s
STEP: Saw pod success
Oct 19 19:27:10.876: INFO: Pod "pod-projected-configmaps-4755d363-01f5-4cf6-8cbb-29f3d98df6e4" satisfied condition "Succeeded or Failed"
Oct 19 19:27:10.996: INFO: Trying to get logs from node ip-172-20-52-34.eu-west-1.compute.internal pod pod-projected-configmaps-4755d363-01f5-4cf6-8cbb-29f3d98df6e4 container agnhost-container: <nil>
STEP: delete the pod
Oct 19 19:27:11.224: INFO: Waiting for pod pod-projected-configmaps-4755d363-01f5-4cf6-8cbb-29f3d98df6e4 to disappear
Oct 19 19:27:11.333: INFO: Pod pod-projected-configmaps-4755d363-01f5-4cf6-8cbb-29f3d98df6e4 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.803 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":45,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:27:11.648: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 137 lines ...
Oct 19 19:25:06.292: INFO: stdout: "nodeport-test-fsfx4"
Oct 19 19:25:06.292: INFO: Running '/tmp/kubectl1810586729/kubectl --server=https://api.e2e-e05d2a908c-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-8394 exec execpodbt5mn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.55.71 30403'
Oct 19 19:25:07.487: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.20.55.71 30403\nConnection to 172.20.55.71 30403 port [tcp/*] succeeded!\n"
Oct 19 19:25:07.488: INFO: stdout: "nodeport-test-b8x48"
Oct 19 19:25:07.488: INFO: Running '/tmp/kubectl1810586729/kubectl --server=https://api.e2e-e05d2a908c-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-8394 exec execpodbt5mn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.35.5 30403'
Oct 19 19:25:10.688: INFO: rc: 1
Oct 19 19:25:10.688: INFO: Service reachability failing with error: error running /tmp/kubectl1810586729/kubectl --server=https://api.e2e-e05d2a908c-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-8394 exec execpodbt5mn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.35.5 30403:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 172.20.35.5 30403
nc: connect to 172.20.35.5 port 30403 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
... skipping 31 further identical nc probes against 172.20.35.5:30403 (Oct 19 19:25:11.689 through 19:27:14.016), each failing with "nc: connect to 172.20.35.5 port 30403 (tcp) timed out: Operation in progress" and exit code 1 ...
Oct 19 19:27:14.016: FAIL: Unexpected error:
    <*errors.errorString | 0xc00345c030>: {
        s: "service is not reachable within 2m0s timeout on endpoint 172.20.35.5:30403 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint 172.20.35.5:30403 over TCP protocol
occurred

... skipping 279 lines ...
• Failure [150.692 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to create a functioning NodePort service [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Oct 19 19:27:14.016: Unexpected error:
      <*errors.errorString | 0xc00345c030>: {
          s: "service is not reachable within 2m0s timeout on endpoint 172.20.35.5:30403 over TCP protocol",
      }
      service is not reachable within 2m0s timeout on endpoint 172.20.35.5:30403 over TCP protocol
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1169
------------------------------
{"msg":"FAILED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":-1,"completed":3,"skipped":19,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:27:19.063: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 88 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl expose
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1223
    should create services for rc  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]","total":-1,"completed":9,"skipped":57,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:27:20.806: INFO: Driver local doesn't support ext4 -- skipping
... skipping 39 lines ...
Oct 19 19:27:16.279: INFO: PersistentVolumeClaim pvc-mpd6p found but phase is Pending instead of Bound.
Oct 19 19:27:18.410: INFO: PersistentVolumeClaim pvc-mpd6p found and phase=Bound (4.343186765s)
Oct 19 19:27:18.410: INFO: Waiting up to 3m0s for PersistentVolume local-gbb4t to have phase Bound
Oct 19 19:27:18.516: INFO: PersistentVolume local-gbb4t found and phase=Bound (106.290674ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-sv6n
STEP: Creating a pod to test subpath
Oct 19 19:27:18.837: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-sv6n" in namespace "provisioning-4697" to be "Succeeded or Failed"
Oct 19 19:27:18.944: INFO: Pod "pod-subpath-test-preprovisionedpv-sv6n": Phase="Pending", Reason="", readiness=false. Elapsed: 107.047064ms
Oct 19 19:27:21.053: INFO: Pod "pod-subpath-test-preprovisionedpv-sv6n": Phase="Pending", Reason="", readiness=false. Elapsed: 2.216265181s
Oct 19 19:27:23.163: INFO: Pod "pod-subpath-test-preprovisionedpv-sv6n": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.325515998s
STEP: Saw pod success
Oct 19 19:27:23.163: INFO: Pod "pod-subpath-test-preprovisionedpv-sv6n" satisfied condition "Succeeded or Failed"
Oct 19 19:27:23.269: INFO: Trying to get logs from node ip-172-20-35-5.eu-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-sv6n container test-container-subpath-preprovisionedpv-sv6n: <nil>
STEP: delete the pod
Oct 19 19:27:23.488: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-sv6n to disappear
Oct 19 19:27:23.594: INFO: Pod pod-subpath-test-preprovisionedpv-sv6n no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-sv6n
Oct 19 19:27:23.594: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-sv6n" in namespace "provisioning-4697"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":9,"skipped":26,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:27:25.157: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 29 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 19 19:27:26.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-981" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return chunks of table results for list calls","total":-1,"completed":10,"skipped":28,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:27:26.790: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 25 lines ...
[It] should allow exec of files on the volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
Oct 19 19:27:27.344: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Oct 19 19:27:27.344: INFO: Creating resource for inline volume
STEP: Creating pod exec-volume-test-inlinevolume-mfmm
STEP: Creating a pod to test exec-volume-test
Oct 19 19:27:27.452: INFO: Waiting up to 5m0s for pod "exec-volume-test-inlinevolume-mfmm" in namespace "volume-4906" to be "Succeeded or Failed"
Oct 19 19:27:27.559: INFO: Pod "exec-volume-test-inlinevolume-mfmm": Phase="Pending", Reason="", readiness=false. Elapsed: 106.285414ms
Oct 19 19:27:29.665: INFO: Pod "exec-volume-test-inlinevolume-mfmm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.212843163s
STEP: Saw pod success
Oct 19 19:27:29.665: INFO: Pod "exec-volume-test-inlinevolume-mfmm" satisfied condition "Succeeded or Failed"
Oct 19 19:27:29.771: INFO: Trying to get logs from node ip-172-20-55-71.eu-west-1.compute.internal pod exec-volume-test-inlinevolume-mfmm container exec-container-inlinevolume-mfmm: <nil>
STEP: delete the pod
Oct 19 19:27:29.992: INFO: Waiting for pod exec-volume-test-inlinevolume-mfmm to disappear
Oct 19 19:27:30.098: INFO: Pod exec-volume-test-inlinevolume-mfmm no longer exists
STEP: Deleting pod exec-volume-test-inlinevolume-mfmm
Oct 19 19:27:30.098: INFO: Deleting pod "exec-volume-test-inlinevolume-mfmm" in namespace "volume-4906"
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 19 19:27:30.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-4906" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":11,"skipped":30,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:27:30.446: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 69 lines ...
Oct 19 19:26:45.119: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename volume
STEP: Waiting for a default service account to be provisioned in namespace
[It] should store data
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
Oct 19 19:26:45.652: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Oct 19 19:26:45.869: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-volume-9992" in namespace "volume-9992" to be "Succeeded or Failed"
Oct 19 19:26:45.975: INFO: Pod "hostpath-symlink-prep-volume-9992": Phase="Pending", Reason="", readiness=false. Elapsed: 106.442533ms
Oct 19 19:26:48.083: INFO: Pod "hostpath-symlink-prep-volume-9992": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.213716596s
STEP: Saw pod success
Oct 19 19:26:48.083: INFO: Pod "hostpath-symlink-prep-volume-9992" satisfied condition "Succeeded or Failed"
Oct 19 19:26:48.083: INFO: Deleting pod "hostpath-symlink-prep-volume-9992" in namespace "volume-9992"
Oct 19 19:26:48.194: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-volume-9992" to be fully deleted
Oct 19 19:26:48.302: INFO: Creating resource for inline volume
STEP: starting hostpathsymlink-injector
STEP: Writing text file contents in the container.
Oct 19 19:26:52.625: INFO: Running '/tmp/kubectl1810586729/kubectl --server=https://api.e2e-e05d2a908c-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=volume-9992 exec hostpathsymlink-injector --namespace=volume-9992 -- /bin/sh -c echo 'Hello from hostPathSymlink from namespace volume-9992' > /opt/0/index.html'
... skipping 48 lines ...
Oct 19 19:27:26.091: INFO: Pod hostpathsymlink-client still exists
Oct 19 19:27:27.984: INFO: Waiting for pod hostpathsymlink-client to disappear
Oct 19 19:27:28.091: INFO: Pod hostpathsymlink-client still exists
Oct 19 19:27:29.985: INFO: Waiting for pod hostpathsymlink-client to disappear
Oct 19 19:27:30.092: INFO: Pod hostpathsymlink-client no longer exists
STEP: cleaning the environment after hostpathsymlink
Oct 19 19:27:30.203: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-volume-9992" in namespace "volume-9992" to be "Succeeded or Failed"
Oct 19 19:27:30.311: INFO: Pod "hostpath-symlink-prep-volume-9992": Phase="Pending", Reason="", readiness=false. Elapsed: 107.815594ms
Oct 19 19:27:32.419: INFO: Pod "hostpath-symlink-prep-volume-9992": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.215821523s
STEP: Saw pod success
Oct 19 19:27:32.420: INFO: Pod "hostpath-symlink-prep-volume-9992" satisfied condition "Succeeded or Failed"
Oct 19 19:27:32.420: INFO: Deleting pod "hostpath-symlink-prep-volume-9992" in namespace "volume-9992"
Oct 19 19:27:32.531: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-volume-9992" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 19 19:27:32.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-9992" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] volumes should store data","total":-1,"completed":6,"skipped":58,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:27:32.876: INFO: Only supported for providers [openstack] (not aws)
... skipping 81 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SSSSS
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":3,"skipped":16,"failed":0}
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 19 19:24:20.557: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename csi-mock-volumes
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 35 lines ...
Oct 19 19:24:24.634: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-865
Oct 19 19:24:24.741: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-865
Oct 19 19:24:24.848: INFO: creating *v1.StatefulSet: csi-mock-volumes-865-9014/csi-mockplugin
Oct 19 19:24:24.955: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-865
Oct 19 19:24:25.062: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-865"
Oct 19 19:24:25.169: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-865 to register on node ip-172-20-52-34.eu-west-1.compute.internal
I1019 19:24:33.770384    5442 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null}
I1019 19:24:33.876615    5442 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-865","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null}
I1019 19:24:33.985124    5442 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":"","FullError":null}
I1019 19:24:34.091029    5442 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null}
I1019 19:24:34.339580    5442 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-865","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null}
I1019 19:24:35.321834    5442 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-865"},"Error":"","FullError":null}
STEP: Creating pod
Oct 19 19:24:42.165: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Oct 19 19:24:42.277: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-kc8s8] to have phase Bound
I1019 19:24:42.284721    5442 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-a0b076e3-35bb-4fda-9894-2c53a5ed27c5","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":null,"Error":"rpc error: code = ResourceExhausted desc = fake error","FullError":{"code":8,"message":"fake error"}}
Oct 19 19:24:42.389: INFO: PersistentVolumeClaim pvc-kc8s8 found but phase is Pending instead of Bound.
I1019 19:24:42.392730    5442 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-a0b076e3-35bb-4fda-9894-2c53a5ed27c5","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-a0b076e3-35bb-4fda-9894-2c53a5ed27c5"}}},"Error":"","FullError":null}
Oct 19 19:24:44.496: INFO: PersistentVolumeClaim pvc-kc8s8 found and phase=Bound (2.219161932s)
I1019 19:24:44.951841    5442 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
Oct 19 19:24:45.059: INFO: >>> kubeConfig: /root/.kube/config
I1019 19:24:45.934227    5442 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-a0b076e3-35bb-4fda-9894-2c53a5ed27c5/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-a0b076e3-35bb-4fda-9894-2c53a5ed27c5","storage.kubernetes.io/csiProvisionerIdentity":"1634671474154-8081-csi-mock-csi-mock-volumes-865"}},"Response":{},"Error":"","FullError":null}
I1019 19:24:46.826079    5442 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
Oct 19 19:24:46.942: INFO: >>> kubeConfig: /root/.kube/config
Oct 19 19:24:47.743: INFO: >>> kubeConfig: /root/.kube/config
Oct 19 19:24:48.472: INFO: >>> kubeConfig: /root/.kube/config
I1019 19:24:49.182562    5442 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-a0b076e3-35bb-4fda-9894-2c53a5ed27c5/globalmount","target_path":"/var/lib/kubelet/pods/167671e3-ac66-4f62-9808-47e8129ea82b/volumes/kubernetes.io~csi/pvc-a0b076e3-35bb-4fda-9894-2c53a5ed27c5/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-a0b076e3-35bb-4fda-9894-2c53a5ed27c5","storage.kubernetes.io/csiProvisionerIdentity":"1634671474154-8081-csi-mock-csi-mock-volumes-865"}},"Response":{},"Error":"","FullError":null}
Oct 19 19:24:57.029: INFO: Deleting pod "pvc-volume-tester-fwvhc" in namespace "csi-mock-volumes-865"
Oct 19 19:24:57.137: INFO: Wait up to 5m0s for pod "pvc-volume-tester-fwvhc" to be fully deleted
Oct 19 19:24:59.241: INFO: >>> kubeConfig: /root/.kube/config
I1019 19:25:00.040849    5442 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/167671e3-ac66-4f62-9808-47e8129ea82b/volumes/kubernetes.io~csi/pvc-a0b076e3-35bb-4fda-9894-2c53a5ed27c5/mount"},"Response":{},"Error":"","FullError":null}
I1019 19:25:00.251195    5442 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I1019 19:25:00.362222    5442 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-a0b076e3-35bb-4fda-9894-2c53a5ed27c5/globalmount"},"Response":{},"Error":"","FullError":null}
I1019 19:25:07.471505    5442 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null}
STEP: Checking PVC events
Oct 19 19:25:08.459: INFO: PVC event ADDED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-kc8s8", GenerateName:"pvc-", Namespace:"csi-mock-volumes-865", SelfLink:"", UID:"a0b076e3-35bb-4fda-9894-2c53a5ed27c5", ResourceVersion:"5809", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63770268282, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004321308), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004321320)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc00432efe0), VolumeMode:(*v1.PersistentVolumeMode)(0xc00432f020), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Oct 19 19:25:08.459: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-kc8s8", GenerateName:"pvc-", Namespace:"csi-mock-volumes-865", SelfLink:"", UID:"a0b076e3-35bb-4fda-9894-2c53a5ed27c5", ResourceVersion:"5810", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63770268282, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-865"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003e3e900), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003e3e918)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003e3e930), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003e3e948)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc0028f9310), VolumeMode:(*v1.PersistentVolumeMode)(0xc0028f9320), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Oct 19 19:25:08.459: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-kc8s8", GenerateName:"pvc-", Namespace:"csi-mock-volumes-865", SelfLink:"", UID:"a0b076e3-35bb-4fda-9894-2c53a5ed27c5", ResourceVersion:"5816", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63770268282, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-865"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002a623d8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002a623f0)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002a62408), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002a62420)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-a0b076e3-35bb-4fda-9894-2c53a5ed27c5", StorageClassName:(*string)(0xc000773060), VolumeMode:(*v1.PersistentVolumeMode)(0xc0007730e0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Oct 19 19:25:08.460: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-kc8s8", GenerateName:"pvc-", Namespace:"csi-mock-volumes-865", SelfLink:"", UID:"a0b076e3-35bb-4fda-9894-2c53a5ed27c5", ResourceVersion:"5817", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63770268282, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-865"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002a62450), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002a62468)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002a62480), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002a62498)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-a0b076e3-35bb-4fda-9894-2c53a5ed27c5", StorageClassName:(*string)(0xc000773290), VolumeMode:(*v1.PersistentVolumeMode)(0xc0007732d0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Oct 19 19:25:08.460: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-kc8s8", GenerateName:"pvc-", Namespace:"csi-mock-volumes-865", SelfLink:"", UID:"a0b076e3-35bb-4fda-9894-2c53a5ed27c5", ResourceVersion:"6656", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63770268282, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(0xc002a624c8), DeletionGracePeriodSeconds:(*int64)(0xc002b09f28), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-865"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002a624e0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002a624f8)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002a62510), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002a62528)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-a0b076e3-35bb-4fda-9894-2c53a5ed27c5", StorageClassName:(*string)(0xc000773420), VolumeMode:(*v1.PersistentVolumeMode)(0xc000773460), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
... skipping 48 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  storage capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:900
    exhausted, immediate binding
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:958
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume storage capacity exhausted, immediate binding","total":-1,"completed":4,"skipped":16,"failed":0}

SSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:27:34.567: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 83 lines ...
Oct 19 19:27:30.993: INFO: PersistentVolumeClaim pvc-tztnx found but phase is Pending instead of Bound.
Oct 19 19:27:33.102: INFO: PersistentVolumeClaim pvc-tztnx found and phase=Bound (4.32116681s)
Oct 19 19:27:33.102: INFO: Waiting up to 3m0s for PersistentVolume local-hdbpw to have phase Bound
Oct 19 19:27:33.211: INFO: PersistentVolume local-hdbpw found and phase=Bound (108.947665ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-cvgs
STEP: Creating a pod to test exec-volume-test
Oct 19 19:27:33.530: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-cvgs" in namespace "volume-4449" to be "Succeeded or Failed"
Oct 19 19:27:33.636: INFO: Pod "exec-volume-test-preprovisionedpv-cvgs": Phase="Pending", Reason="", readiness=false. Elapsed: 106.140534ms
Oct 19 19:27:35.743: INFO: Pod "exec-volume-test-preprovisionedpv-cvgs": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.213359194s
STEP: Saw pod success
Oct 19 19:27:35.744: INFO: Pod "exec-volume-test-preprovisionedpv-cvgs" satisfied condition "Succeeded or Failed"
Oct 19 19:27:35.849: INFO: Trying to get logs from node ip-172-20-43-129.eu-west-1.compute.internal pod exec-volume-test-preprovisionedpv-cvgs container exec-container-preprovisionedpv-cvgs: <nil>
STEP: delete the pod
Oct 19 19:27:36.072: INFO: Waiting for pod exec-volume-test-preprovisionedpv-cvgs to disappear
Oct 19 19:27:36.178: INFO: Pod exec-volume-test-preprovisionedpv-cvgs no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-cvgs
Oct 19 19:27:36.178: INFO: Deleting pod "exec-volume-test-preprovisionedpv-cvgs" in namespace "volume-4449"
... skipping 17 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":10,"skipped":68,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:27:37.634: INFO: Driver hostPath doesn't support ext3 -- skipping
... skipping 92 lines ...
Oct 19 19:25:21.643: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Oct 19 19:25:21.643: INFO: Running '/tmp/kubectl1810586729/kubectl --server=https://api.e2e-e05d2a908c-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4789 exec execpod-affinityhd4s6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.68.108.237 80'
Oct 19 19:25:22.775: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 100.68.108.237 80\nConnection to 100.68.108.237 80 port [tcp/http] succeeded!\n"
Oct 19 19:25:22.775: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Oct 19 19:25:22.775: INFO: Running '/tmp/kubectl1810586729/kubectl --server=https://api.e2e-e05d2a908c-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4789 exec execpod-affinityhd4s6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.35.5 30290'
Oct 19 19:25:25.931: INFO: rc: 1
Oct 19 19:25:25.932: INFO: Service reachability failing with error: error running /tmp/kubectl1810586729/kubectl --server=https://api.e2e-e05d2a908c-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4789 exec execpod-affinityhd4s6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.35.5 30290:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 172.20.35.5 30290
nc: connect to 172.20.35.5 port 30290 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
... skipping 378 lines ...
Oct 19 19:27:14.932: INFO: Running '/tmp/kubectl1810586729/kubectl --server=https://api.e2e-e05d2a908c-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4789 exec execpod-affinityhd4s6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.35.5 30290'
Oct 19 19:27:18.111: INFO: rc: 1
Oct 19 19:27:18.111: INFO: Service reachability failing with error: error running /tmp/kubectl1810586729/kubectl --server=https://api.e2e-e05d2a908c-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4789 exec execpod-affinityhd4s6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.35.5 30290:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 172.20.35.5 30290
nc: connect to 172.20.35.5 port 30290 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 19 19:27:18.932: INFO: Running '/tmp/kubectl1810586729/kubectl --server=https://api.e2e-e05d2a908c-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4789 exec execpod-affinityhd4s6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.35.5 30290'
Oct 19 19:27:22.065: INFO: rc: 1
Oct 19 19:27:22.065: INFO: Service reachability failing with error: error running /tmp/kubectl1810586729/kubectl --server=https://api.e2e-e05d2a908c-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4789 exec execpod-affinityhd4s6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.35.5 30290:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 172.20.35.5 30290
nc: connect to 172.20.35.5 port 30290 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 19 19:27:22.932: INFO: Running '/tmp/kubectl1810586729/kubectl --server=https://api.e2e-e05d2a908c-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4789 exec execpod-affinityhd4s6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.35.5 30290'
Oct 19 19:27:26.086: INFO: rc: 1
Oct 19 19:27:26.086: INFO: Service reachability failing with error: error running /tmp/kubectl1810586729/kubectl --server=https://api.e2e-e05d2a908c-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4789 exec execpod-affinityhd4s6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.35.5 30290:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 172.20.35.5 30290
nc: connect to 172.20.35.5 port 30290 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 19 19:27:26.086: INFO: Running '/tmp/kubectl1810586729/kubectl --server=https://api.e2e-e05d2a908c-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4789 exec execpod-affinityhd4s6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.35.5 30290'
Oct 19 19:27:29.225: INFO: rc: 1
Oct 19 19:27:29.225: INFO: Service reachability failing with error: error running /tmp/kubectl1810586729/kubectl --server=https://api.e2e-e05d2a908c-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4789 exec execpod-affinityhd4s6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.35.5 30290:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 172.20.35.5 30290
nc: connect to 172.20.35.5 port 30290 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 19 19:27:29.226: FAIL: Unexpected error:
    <*errors.errorString | 0xc0026943e0>: {
        s: "service is not reachable within 2m0s timeout on endpoint 172.20.35.5:30290 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint 172.20.35.5:30290 over TCP protocol
occurred

... skipping 297 lines ...
• Failure [162.933 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Oct 19 19:27:29.226: Unexpected error:
      <*errors.errorString | 0xc0026943e0>: {
          s: "service is not reachable within 2m0s timeout on endpoint 172.20.35.5:30290 over TCP protocol",
      }
      service is not reachable within 2m0s timeout on endpoint 172.20.35.5:30290 over TCP protocol
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2493
------------------------------
{"msg":"FAILED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":1,"skipped":55,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:27:43.924: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 57 lines ...
Oct 19 19:27:15.701: INFO: PersistentVolumeClaim pvc-gthfc found but phase is Pending instead of Bound.
Oct 19 19:27:17.813: INFO: PersistentVolumeClaim pvc-gthfc found and phase=Bound (2.21653588s)
Oct 19 19:27:17.813: INFO: Waiting up to 3m0s for PersistentVolume local-glnbq to have phase Bound
Oct 19 19:27:17.919: INFO: PersistentVolume local-glnbq found and phase=Bound (105.673744ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-sgb8
STEP: Creating a pod to test atomic-volume-subpath
Oct 19 19:27:18.238: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-sgb8" in namespace "provisioning-4197" to be "Succeeded or Failed"
Oct 19 19:27:18.344: INFO: Pod "pod-subpath-test-preprovisionedpv-sgb8": Phase="Pending", Reason="", readiness=false. Elapsed: 105.196033ms
Oct 19 19:27:20.455: INFO: Pod "pod-subpath-test-preprovisionedpv-sgb8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.21671012s
Oct 19 19:27:22.561: INFO: Pod "pod-subpath-test-preprovisionedpv-sgb8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.322385528s
Oct 19 19:27:24.669: INFO: Pod "pod-subpath-test-preprovisionedpv-sgb8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.430643185s
Oct 19 19:27:26.775: INFO: Pod "pod-subpath-test-preprovisionedpv-sgb8": Phase="Running", Reason="", readiness=true. Elapsed: 8.537025963s
Oct 19 19:27:28.882: INFO: Pod "pod-subpath-test-preprovisionedpv-sgb8": Phase="Running", Reason="", readiness=true. Elapsed: 10.643402471s
Oct 19 19:27:30.989: INFO: Pod "pod-subpath-test-preprovisionedpv-sgb8": Phase="Running", Reason="", readiness=true. Elapsed: 12.75085197s
Oct 19 19:27:33.095: INFO: Pod "pod-subpath-test-preprovisionedpv-sgb8": Phase="Running", Reason="", readiness=true. Elapsed: 14.856549619s
Oct 19 19:27:35.200: INFO: Pod "pod-subpath-test-preprovisionedpv-sgb8": Phase="Running", Reason="", readiness=true. Elapsed: 16.962082468s
Oct 19 19:27:37.307: INFO: Pod "pod-subpath-test-preprovisionedpv-sgb8": Phase="Running", Reason="", readiness=true. Elapsed: 19.068967938s
Oct 19 19:27:39.414: INFO: Pod "pod-subpath-test-preprovisionedpv-sgb8": Phase="Running", Reason="", readiness=true. Elapsed: 21.175689478s
Oct 19 19:27:41.569: INFO: Pod "pod-subpath-test-preprovisionedpv-sgb8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 23.331003195s
STEP: Saw pod success
Oct 19 19:27:41.569: INFO: Pod "pod-subpath-test-preprovisionedpv-sgb8" satisfied condition "Succeeded or Failed"
Oct 19 19:27:41.675: INFO: Trying to get logs from node ip-172-20-43-129.eu-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-sgb8 container test-container-subpath-preprovisionedpv-sgb8: <nil>
STEP: delete the pod
Oct 19 19:27:41.894: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-sgb8 to disappear
Oct 19 19:27:41.999: INFO: Pod pod-subpath-test-preprovisionedpv-sgb8 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-sgb8
Oct 19 19:27:41.999: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-sgb8" in namespace "provisioning-4197"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":9,"skipped":54,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:27:44.220: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 59 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 19 19:27:45.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-2437" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Networking should provide unchanging, static URL paths for kubernetes api services","total":-1,"completed":2,"skipped":58,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:27:45.792: INFO: Driver local doesn't support ext4 -- skipping
... skipping 60 lines ...
• [SLOW TEST:52.333 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group but different versions [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":-1,"completed":3,"skipped":13,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:27:45.946: INFO: Only supported for providers [gce gke] (not aws)
... skipping 47 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name projected-secret-test-9e2b6612-9c9a-419d-b6cf-fb977c4f2cd8
STEP: Creating a pod to test consume secrets
Oct 19 19:27:45.001: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-5e2337e1-769a-41fe-b6a3-1d09687e99d5" in namespace "projected-4652" to be "Succeeded or Failed"
Oct 19 19:27:45.106: INFO: Pod "pod-projected-secrets-5e2337e1-769a-41fe-b6a3-1d09687e99d5": Phase="Pending", Reason="", readiness=false. Elapsed: 105.192964ms
Oct 19 19:27:47.213: INFO: Pod "pod-projected-secrets-5e2337e1-769a-41fe-b6a3-1d09687e99d5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.211298885s
STEP: Saw pod success
Oct 19 19:27:47.213: INFO: Pod "pod-projected-secrets-5e2337e1-769a-41fe-b6a3-1d09687e99d5" satisfied condition "Succeeded or Failed"
Oct 19 19:27:47.318: INFO: Trying to get logs from node ip-172-20-35-5.eu-west-1.compute.internal pod pod-projected-secrets-5e2337e1-769a-41fe-b6a3-1d09687e99d5 container projected-secret-volume-test: <nil>
STEP: delete the pod
Oct 19 19:27:47.537: INFO: Waiting for pod pod-projected-secrets-5e2337e1-769a-41fe-b6a3-1d09687e99d5 to disappear
Oct 19 19:27:47.645: INFO: Pod pod-projected-secrets-5e2337e1-769a-41fe-b6a3-1d09687e99d5 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 19 19:27:47.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4652" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":58,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:27:47.872: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 11 lines ...
      Only supported for providers [vsphere] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1437
------------------------------
S
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":-1,"completed":3,"skipped":63,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 19 19:27:47.333: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename disruption
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 121 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":5,"skipped":70,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:27:53.396: INFO: Driver emptydir doesn't support ext4 -- skipping
... skipping 209 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSIStorageCapacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1134
    CSIStorageCapacity used, insufficient capacity
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1177
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity","total":-1,"completed":12,"skipped":40,"failed":0}

S
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 54 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl client-side validation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:982
    should create/apply a valid CR for CRD with validation schema
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1001
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl client-side validation should create/apply a valid CR for CRD with validation schema","total":-1,"completed":11,"skipped":60,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 38 lines ...
Oct 19 19:26:34.012: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-1929
Oct 19 19:26:34.120: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-1929
Oct 19 19:26:34.228: INFO: creating *v1.StatefulSet: csi-mock-volumes-1929-4316/csi-mockplugin
Oct 19 19:26:34.337: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-1929
Oct 19 19:26:34.451: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-1929"
Oct 19 19:26:34.558: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-1929 to register on node ip-172-20-35-5.eu-west-1.compute.internal
I1019 19:26:38.751061    5510 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null}
I1019 19:26:38.856687    5510 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-1929","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null}
I1019 19:26:38.968265    5510 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":"","FullError":null}
I1019 19:26:39.095495    5510 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null}
I1019 19:26:39.351539    5510 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-1929","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null}
I1019 19:26:40.059370    5510 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-1929"},"Error":"","FullError":null}
STEP: Creating pod
Oct 19 19:26:44.705: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
I1019 19:26:44.946038    5510 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-c8496517-2aac-44d9-ae9b-d64aa7a9cc3e","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":null,"Error":"rpc error: code = ResourceExhausted desc = fake error","FullError":{"code":8,"message":"fake error"}}
I1019 19:26:45.062982    5510 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-c8496517-2aac-44d9-ae9b-d64aa7a9cc3e","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-c8496517-2aac-44d9-ae9b-d64aa7a9cc3e"}}},"Error":"","FullError":null}
I1019 19:26:46.149448    5510 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
Oct 19 19:26:46.256: INFO: >>> kubeConfig: /root/.kube/config
I1019 19:26:47.065085    5510 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c8496517-2aac-44d9-ae9b-d64aa7a9cc3e/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-c8496517-2aac-44d9-ae9b-d64aa7a9cc3e","storage.kubernetes.io/csiProvisionerIdentity":"1634671599151-8081-csi-mock-csi-mock-volumes-1929"}},"Response":{},"Error":"","FullError":null}
I1019 19:26:47.827374    5510 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
Oct 19 19:26:47.934: INFO: >>> kubeConfig: /root/.kube/config
Oct 19 19:26:48.658: INFO: >>> kubeConfig: /root/.kube/config
Oct 19 19:26:49.455: INFO: >>> kubeConfig: /root/.kube/config
I1019 19:26:50.233840    5510 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c8496517-2aac-44d9-ae9b-d64aa7a9cc3e/globalmount","target_path":"/var/lib/kubelet/pods/0c95c8db-4271-435c-841b-96010913d9a9/volumes/kubernetes.io~csi/pvc-c8496517-2aac-44d9-ae9b-d64aa7a9cc3e/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-c8496517-2aac-44d9-ae9b-d64aa7a9cc3e","storage.kubernetes.io/csiProvisionerIdentity":"1634671599151-8081-csi-mock-csi-mock-volumes-1929"}},"Response":{},"Error":"","FullError":null}
Oct 19 19:26:53.137: INFO: Deleting pod "pvc-volume-tester-dm7cj" in namespace "csi-mock-volumes-1929"
Oct 19 19:26:53.247: INFO: Wait up to 5m0s for pod "pvc-volume-tester-dm7cj" to be fully deleted
Oct 19 19:26:55.047: INFO: >>> kubeConfig: /root/.kube/config
I1019 19:26:55.773895    5510 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/0c95c8db-4271-435c-841b-96010913d9a9/volumes/kubernetes.io~csi/pvc-c8496517-2aac-44d9-ae9b-d64aa7a9cc3e/mount"},"Response":{},"Error":"","FullError":null}
I1019 19:26:55.905605    5510 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I1019 19:26:56.014476    5510 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c8496517-2aac-44d9-ae9b-d64aa7a9cc3e/globalmount"},"Response":{},"Error":"","FullError":null}
I1019 19:27:03.586972    5510 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null}
STEP: Checking PVC events
Oct 19 19:27:04.578: INFO: PVC event ADDED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-bqhb8", GenerateName:"pvc-", Namespace:"csi-mock-volumes-1929", SelfLink:"", UID:"c8496517-2aac-44d9-ae9b-d64aa7a9cc3e", ResourceVersion:"9224", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63770268404, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00207ae58), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00207ae70)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc0043e5e70), VolumeMode:(*v1.PersistentVolumeMode)(0xc0043e5e80), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Oct 19 19:27:04.578: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-bqhb8", GenerateName:"pvc-", Namespace:"csi-mock-volumes-1929", SelfLink:"", UID:"c8496517-2aac-44d9-ae9b-d64aa7a9cc3e", ResourceVersion:"9226", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63770268404, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.kubernetes.io/selected-node":"ip-172-20-35-5.eu-west-1.compute.internal"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00207b200), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00207b218)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00207b230), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00207b248)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc0034181a0), VolumeMode:(*v1.PersistentVolumeMode)(0xc0034181b0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Oct 19 19:27:04.578: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-bqhb8", GenerateName:"pvc-", Namespace:"csi-mock-volumes-1929", SelfLink:"", UID:"c8496517-2aac-44d9-ae9b-d64aa7a9cc3e", ResourceVersion:"9227", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63770268404, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-1929", "volume.kubernetes.io/selected-node":"ip-172-20-35-5.eu-west-1.compute.internal"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003ca3ec0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003ca3ed8)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003ca3ef0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003ca3f08)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003ca3f20), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003ca3f38)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc001eee4f0), VolumeMode:(*v1.PersistentVolumeMode)(0xc001eee500), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Oct 19 19:27:04.578: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-bqhb8", GenerateName:"pvc-", Namespace:"csi-mock-volumes-1929", SelfLink:"", UID:"c8496517-2aac-44d9-ae9b-d64aa7a9cc3e", ResourceVersion:"9230", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63770268404, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-1929", "volume.kubernetes.io/selected-node":"ip-172-20-35-5.eu-west-1.compute.internal"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc001152d80), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001152d98)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc001152db0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001152dc8)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc001152de0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001152df8)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-c8496517-2aac-44d9-ae9b-d64aa7a9cc3e", StorageClassName:(*string)(0xc002316d80), VolumeMode:(*v1.PersistentVolumeMode)(0xc002316d90), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Oct 19 19:27:04.579: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-bqhb8", GenerateName:"pvc-", Namespace:"csi-mock-volumes-1929", SelfLink:"", UID:"c8496517-2aac-44d9-ae9b-d64aa7a9cc3e", ResourceVersion:"9231", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63770268404, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-1929", "volume.kubernetes.io/selected-node":"ip-172-20-35-5.eu-west-1.compute.internal"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc001152e28), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001152e40)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc001152e58), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001152e70)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc001152e88), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001152ea0)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-c8496517-2aac-44d9-ae9b-d64aa7a9cc3e", StorageClassName:(*string)(0xc002316de0), VolumeMode:(*v1.PersistentVolumeMode)(0xc002316df0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
... skipping 49 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  storage capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:900
    exhausted, late binding, no topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:958
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume storage capacity exhausted, late binding, no topology","total":-1,"completed":5,"skipped":28,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]}
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:28:06.620: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 171 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95
    should perform rolling updates and roll backs of template modifications with PVCs
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:286
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications with PVCs","total":-1,"completed":3,"skipped":26,"failed":0}

SSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:28:08.086: INFO: Only supported for providers [azure] (not aws)
... skipping 71 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:237
------------------------------
S
------------------------------
{"msg":"PASSED [sig-node] Probing container should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance]","total":-1,"completed":6,"skipped":59,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:28:08.117: INFO: Only supported for providers [azure] (not aws)
... skipping 30 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 19 19:28:07.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-9176" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":-1,"completed":6,"skipped":33,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:28:08.232: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 62 lines ...
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-7852-crds.webhook.example.com via the AdmissionRegistration API
Oct 19 19:27:16.100: INFO: Waiting for webhook configuration to be ready...
Oct 19 19:27:26.415: INFO: Waiting for webhook configuration to be ready...
Oct 19 19:27:36.750: INFO: Waiting for webhook configuration to be ready...
Oct 19 19:27:47.015: INFO: Waiting for webhook configuration to be ready...
Oct 19 19:27:57.229: INFO: Waiting for webhook configuration to be ready...
Oct 19 19:27:57.230: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.errorString | 0xc000242260>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 62128 lines ...
• Failure [338.274 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to up and down services [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1015

  Oct 19 19:44:51.301: Unexpected error:
      <*errors.errorString | 0xc004c62060>: {
          s: "service verification failed for: 100.69.145.202\nexpected [up-down-1-2kw4s up-down-1-g6vkm up-down-1-kwnfm]\nreceived [up-down-1-g6vkm wget: download timed out]",
      }
      service verification failed for: 100.69.145.202
      expected [up-down-1-2kw4s up-down-1-g6vkm up-down-1-kwnfm]
      received [up-down-1-g6vkm wget: download timed out]
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1031
------------------------------
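The verification failure above means only one of the three expected backends answered on 100.69.145.202. The endpoint set can be checked directly; the service name "up-down-1" is inferred from the pod names, and the namespace is not shown in this excerpt, so both are assumptions.

# Compare the Endpoints object against the expected pod set:
kubectl --namespace=<test-namespace> get endpoints up-down-1 -o wide
# Probe the ClusterIP the test used, from any pod in the cluster
# (port 80 is an assumption; the log does not show the service port):
# wget -qO- --timeout=2 http://100.69.145.202/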
... skipping 24 lines ...
Oct 19 19:44:45.411: INFO: PersistentVolumeClaim pvc-l7hfq found but phase is Pending instead of Bound.
Oct 19 19:44:47.519: INFO: PersistentVolumeClaim pvc-l7hfq found and phase=Bound (4.319311644s)
Oct 19 19:44:47.519: INFO: Waiting up to 3m0s for PersistentVolume local-62rnf to have phase Bound
Oct 19 19:44:47.624: INFO: PersistentVolume local-62rnf found and phase=Bound (105.602846ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-cfsn
STEP: Creating a pod to test subpath
Oct 19 19:44:47.947: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-cfsn" in namespace "provisioning-656" to be "Succeeded or Failed"
Oct 19 19:44:48.053: INFO: Pod "pod-subpath-test-preprovisionedpv-cfsn": Phase="Pending", Reason="", readiness=false. Elapsed: 105.850686ms
Oct 19 19:44:50.159: INFO: Pod "pod-subpath-test-preprovisionedpv-cfsn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.211807888s
Oct 19 19:44:52.264: INFO: Pod "pod-subpath-test-preprovisionedpv-cfsn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.317578399s
STEP: Saw pod success
Oct 19 19:44:52.265: INFO: Pod "pod-subpath-test-preprovisionedpv-cfsn" satisfied condition "Succeeded or Failed"
Oct 19 19:44:52.370: INFO: Trying to get logs from node ip-172-20-43-129.eu-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-cfsn container test-container-subpath-preprovisionedpv-cfsn: <nil>
STEP: delete the pod
Oct 19 19:44:52.590: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-cfsn to disappear
Oct 19 19:44:52.696: INFO: Pod pod-subpath-test-preprovisionedpv-cfsn no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-cfsn
Oct 19 19:44:52.696: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-cfsn" in namespace "provisioning-656"
... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":34,"skipped":207,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:44:56.474: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 66 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name projected-secret-test-map-2cac4e9d-e4bd-4129-95f2-cfb912fda3d1
STEP: Creating a pod to test consume secrets
Oct 19 19:44:55.187: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-32b1f82e-390e-423f-ac1b-d4dc5faa208c" in namespace "projected-5491" to be "Succeeded or Failed"
Oct 19 19:44:55.292: INFO: Pod "pod-projected-secrets-32b1f82e-390e-423f-ac1b-d4dc5faa208c": Phase="Pending", Reason="", readiness=false. Elapsed: 105.057587ms
Oct 19 19:44:57.398: INFO: Pod "pod-projected-secrets-32b1f82e-390e-423f-ac1b-d4dc5faa208c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.210451435s
STEP: Saw pod success
Oct 19 19:44:57.398: INFO: Pod "pod-projected-secrets-32b1f82e-390e-423f-ac1b-d4dc5faa208c" satisfied condition "Succeeded or Failed"
Oct 19 19:44:57.503: INFO: Trying to get logs from node ip-172-20-52-34.eu-west-1.compute.internal pod pod-projected-secrets-32b1f82e-390e-423f-ac1b-d4dc5faa208c container projected-secret-volume-test: <nil>
STEP: delete the pod
Oct 19 19:44:57.718: INFO: Waiting for pod pod-projected-secrets-32b1f82e-390e-423f-ac1b-d4dc5faa208c to disappear
Oct 19 19:44:57.823: INFO: Pod pod-projected-secrets-32b1f82e-390e-423f-ac1b-d4dc5faa208c no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 29 lines ...
Oct 19 19:44:46.580: INFO: PersistentVolumeClaim pvc-9q6gg found but phase is Pending instead of Bound.
Oct 19 19:44:48.689: INFO: PersistentVolumeClaim pvc-9q6gg found and phase=Bound (14.868403845s)
Oct 19 19:44:48.689: INFO: Waiting up to 3m0s for PersistentVolume local-5dcbf to have phase Bound
Oct 19 19:44:48.797: INFO: PersistentVolume local-5dcbf found and phase=Bound (107.505498ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-7w4r
STEP: Creating a pod to test subpath
Oct 19 19:44:49.122: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-7w4r" in namespace "provisioning-8443" to be "Succeeded or Failed"
Oct 19 19:44:49.229: INFO: Pod "pod-subpath-test-preprovisionedpv-7w4r": Phase="Pending", Reason="", readiness=false. Elapsed: 107.658888ms
Oct 19 19:44:51.339: INFO: Pod "pod-subpath-test-preprovisionedpv-7w4r": Phase="Pending", Reason="", readiness=false. Elapsed: 2.21691698s
Oct 19 19:44:53.446: INFO: Pod "pod-subpath-test-preprovisionedpv-7w4r": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.324807232s
STEP: Saw pod success
Oct 19 19:44:53.447: INFO: Pod "pod-subpath-test-preprovisionedpv-7w4r" satisfied condition "Succeeded or Failed"
Oct 19 19:44:53.554: INFO: Trying to get logs from node ip-172-20-55-71.eu-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-7w4r container test-container-subpath-preprovisionedpv-7w4r: <nil>
STEP: delete the pod
Oct 19 19:44:53.775: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-7w4r to disappear
Oct 19 19:44:53.882: INFO: Pod pod-subpath-test-preprovisionedpv-7w4r no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-7w4r
Oct 19 19:44:53.882: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-7w4r" in namespace "provisioning-8443"
STEP: Creating pod pod-subpath-test-preprovisionedpv-7w4r
STEP: Creating a pod to test subpath
Oct 19 19:44:54.098: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-7w4r" in namespace "provisioning-8443" to be "Succeeded or Failed"
Oct 19 19:44:54.206: INFO: Pod "pod-subpath-test-preprovisionedpv-7w4r": Phase="Pending", Reason="", readiness=false. Elapsed: 107.807487ms
Oct 19 19:44:56.315: INFO: Pod "pod-subpath-test-preprovisionedpv-7w4r": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.21686482s
STEP: Saw pod success
Oct 19 19:44:56.315: INFO: Pod "pod-subpath-test-preprovisionedpv-7w4r" satisfied condition "Succeeded or Failed"
Oct 19 19:44:56.422: INFO: Trying to get logs from node ip-172-20-55-71.eu-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-7w4r container test-container-subpath-preprovisionedpv-7w4r: <nil>
STEP: delete the pod
Oct 19 19:44:56.650: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-7w4r to disappear
Oct 19 19:44:56.757: INFO: Pod pod-subpath-test-preprovisionedpv-7w4r no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-7w4r
Oct 19 19:44:56.758: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-7w4r" in namespace "provisioning-8443"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:399
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":33,"skipped":172,"failed":2,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:44:58.248: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 14 lines ...
      Only supported for node OS distro [gci ubuntu custom] (not debian)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:263
------------------------------
S
------------------------------
{"msg":"FAILED [sig-network] Services should be able to up and down services","total":-1,"completed":16,"skipped":132,"failed":5,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a ClusterIP service","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Services should be able to up and down services"]}
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 19 19:44:56.056: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 14 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 19 19:44:58.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1294" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl apply should apply a new configuration to an existing RC","total":-1,"completed":17,"skipped":132,"failed":5,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a ClusterIP service","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Services should be able to up and down services"]}

S
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Oct 19 19:44:57.159: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fc6860c9-f811-4e5a-b4b9-c976b7a73e8b" in namespace "downward-api-4904" to be "Succeeded or Failed"
Oct 19 19:44:57.265: INFO: Pod "downwardapi-volume-fc6860c9-f811-4e5a-b4b9-c976b7a73e8b": Phase="Pending", Reason="", readiness=false. Elapsed: 105.713458ms
Oct 19 19:44:59.371: INFO: Pod "downwardapi-volume-fc6860c9-f811-4e5a-b4b9-c976b7a73e8b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.211853931s
STEP: Saw pod success
Oct 19 19:44:59.371: INFO: Pod "downwardapi-volume-fc6860c9-f811-4e5a-b4b9-c976b7a73e8b" satisfied condition "Succeeded or Failed"
Oct 19 19:44:59.477: INFO: Trying to get logs from node ip-172-20-52-34.eu-west-1.compute.internal pod downwardapi-volume-fc6860c9-f811-4e5a-b4b9-c976b7a73e8b container client-container: <nil>
STEP: delete the pod
Oct 19 19:44:59.696: INFO: Waiting for pod downwardapi-volume-fc6860c9-f811-4e5a-b4b9-c976b7a73e8b to disappear
Oct 19 19:44:59.802: INFO: Pod downwardapi-volume-fc6860c9-f811-4e5a-b4b9-c976b7a73e8b no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 19 19:44:59.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4904" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":35,"skipped":213,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:45:00.045: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 184 lines ...
      We don't set fsGroup on block device, skipped.

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:263
------------------------------
SSSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":14,"skipped":76,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","[sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-network] DNS should provide DNS for services  [Conformance]"]}
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 19 19:43:46.123: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 17 lines ...
• [SLOW TEST:86.770 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":76,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","[sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-network] DNS should provide DNS for services  [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:45:12.920: INFO: Only supported for providers [vsphere] (not aws)
... skipping 46 lines ...
• [SLOW TEST:18.250 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":-1,"completed":36,"skipped":232,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:45:18.492: INFO: Only supported for providers [gce gke] (not aws)
... skipping 122 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should provision a volume and schedule a pod with AllowedTopologies
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:164
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies","total":-1,"completed":37,"skipped":267,"failed":3,"failures":["[sig-network] Conntrack should drop INVALID conntrack entries","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:45:19.148: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 152 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":17,"skipped":110,"failed":3,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]"]}
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:45:20.334: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 51 lines ...
Oct 19 19:44:13.246: INFO: In creating storage class object and pvc objects for driver - sc: &StorageClass{ObjectMeta:{provisioning-8019pd4f      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Provisioner:kubernetes.io/aws-ebs,Parameters:map[string]string{},ReclaimPolicy:nil,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*WaitForFirstConsumer,AllowedTopologies:[]TopologySelectorTerm{},}, pvc: &PersistentVolumeClaim{ObjectMeta:{ pvc- provisioning-801    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-8019pd4f,VolumeMode:nil,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}, src-pvc: &PersistentVolumeClaim{ObjectMeta:{ pvc- provisioning-801    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-8019pd4f,VolumeMode:nil,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}
STEP: Creating a StorageClass
STEP: creating claim=&PersistentVolumeClaim{ObjectMeta:{ pvc- provisioning-801    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-8019pd4f,VolumeMode:nil,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}
STEP: creating a pod referring to the class=&StorageClass{ObjectMeta:{provisioning-8019pd4f    6d276a4a-63af-4f19-9a4a-a51c223cb697 36830 0 2021-10-19 19:44:13 +0000 UTC <nil> <nil> map[] map[] [] []  [{e2e.test Update storage.k8s.io/v1 2021-10-19 19:44:13 +0000 UTC FieldsV1 {"f:mountOptions":{},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:kubernetes.io/aws-ebs,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[debug nouid32],AllowVolumeExpansion:nil,VolumeBindingMode:*WaitForFirstConsumer,AllowedTopologies:[]TopologySelectorTerm{},} claim=&PersistentVolumeClaim{ObjectMeta:{pvc-mbz2m pvc- provisioning-801  5764ea65-901f-4161-9a93-bbf95e8c14a6 36831 0 2021-10-19 19:44:13 +0000 UTC <nil> <nil> map[] map[] [] [kubernetes.io/pvc-protection]  [{e2e.test Update v1 2021-10-19 19:44:13 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:storageClassName":{},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-8019pd4f,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}
STEP: Deleting pod pod-b94e449e-3660-4de3-b422-18b7959fc2e1 in namespace provisioning-801
STEP: checking the created volume is writable on node {Name: Selector:map[] Affinity:nil}
Oct 19 19:44:32.433: INFO: Waiting up to 15m0s for pod "pvc-volume-tester-writer-zk4xz" in namespace "provisioning-801" to be "Succeeded or Failed"
Oct 19 19:44:32.539: INFO: Pod "pvc-volume-tester-writer-zk4xz": Phase="Pending", Reason="", readiness=false. Elapsed: 105.183447ms
Oct 19 19:44:34.646: INFO: Pod "pvc-volume-tester-writer-zk4xz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.212004958s
Oct 19 19:44:36.751: INFO: Pod "pvc-volume-tester-writer-zk4xz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.317826348s
Oct 19 19:44:38.858: INFO: Pod "pvc-volume-tester-writer-zk4xz": Phase="Pending", Reason="", readiness=false. Elapsed: 6.423949898s
Oct 19 19:44:40.964: INFO: Pod "pvc-volume-tester-writer-zk4xz": Phase="Pending", Reason="", readiness=false. Elapsed: 8.53025263s
Oct 19 19:44:43.071: INFO: Pod "pvc-volume-tester-writer-zk4xz": Phase="Pending", Reason="", readiness=false. Elapsed: 10.637794502s
Oct 19 19:44:45.178: INFO: Pod "pvc-volume-tester-writer-zk4xz": Phase="Pending", Reason="", readiness=false. Elapsed: 12.744750445s
Oct 19 19:44:47.284: INFO: Pod "pvc-volume-tester-writer-zk4xz": Phase="Pending", Reason="", readiness=false. Elapsed: 14.850637588s
Oct 19 19:44:49.392: INFO: Pod "pvc-volume-tester-writer-zk4xz": Phase="Pending", Reason="", readiness=false. Elapsed: 16.958479868s
Oct 19 19:44:51.499: INFO: Pod "pvc-volume-tester-writer-zk4xz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.065047089s
STEP: Saw pod success
Oct 19 19:44:51.499: INFO: Pod "pvc-volume-tester-writer-zk4xz" satisfied condition "Succeeded or Failed"
Oct 19 19:44:51.711: INFO: Pod pvc-volume-tester-writer-zk4xz has the following logs: 
Oct 19 19:44:51.711: INFO: Deleting pod "pvc-volume-tester-writer-zk4xz" in namespace "provisioning-801"
Oct 19 19:44:51.819: INFO: Wait up to 5m0s for pod "pvc-volume-tester-writer-zk4xz" to be fully deleted
STEP: checking the created volume has the correct mount options, is readable and retains data on the same node "ip-172-20-35-5.eu-west-1.compute.internal"
Oct 19 19:44:52.243: INFO: Waiting up to 15m0s for pod "pvc-volume-tester-reader-6ncs4" in namespace "provisioning-801" to be "Succeeded or Failed"
Oct 19 19:44:52.348: INFO: Pod "pvc-volume-tester-reader-6ncs4": Phase="Pending", Reason="", readiness=false. Elapsed: 105.272516ms
Oct 19 19:44:54.458: INFO: Pod "pvc-volume-tester-reader-6ncs4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.214632027s
Oct 19 19:44:56.564: INFO: Pod "pvc-volume-tester-reader-6ncs4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.320929542s
Oct 19 19:44:58.671: INFO: Pod "pvc-volume-tester-reader-6ncs4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.427920296s
STEP: Saw pod success
Oct 19 19:44:58.671: INFO: Pod "pvc-volume-tester-reader-6ncs4" satisfied condition "Succeeded or Failed"
Oct 19 19:44:58.885: INFO: Pod pvc-volume-tester-reader-6ncs4 has the following logs: hello world

Oct 19 19:44:58.885: INFO: Deleting pod "pvc-volume-tester-reader-6ncs4" in namespace "provisioning-801"
Oct 19 19:44:58.995: INFO: Wait up to 5m0s for pod "pvc-volume-tester-reader-6ncs4" to be fully deleted
Oct 19 19:44:59.101: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-mbz2m] to have phase Bound
Oct 19 19:44:59.206: INFO: PersistentVolumeClaim pvc-mbz2m found and phase=Bound (104.919007ms)
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] provisioning
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should provision storage with mount options
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:179
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options","total":-1,"completed":37,"skipped":262,"failed":3,"failures":["[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PVC and non-pre-bound PV: test write access","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]"]}

S
------------------------------
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 49 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":-1,"completed":26,"skipped":181,"failed":1,"failures":["[sig-network] Services should implement service.kubernetes.io/headless"]}
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:45:20.688: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 69 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: hostPathSymlink]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver hostPathSymlink doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 38 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 19 19:45:20.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-4154" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":-1,"completed":38,"skipped":280,"failed":3,"failures":["[sig-network] Conntrack should drop INVALID conntrack entries","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:45:21.010: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 2 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-link]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 129 lines ...
• [SLOW TEST:60.962 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":155,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-network] DNS should resolve DNS of partial qualified names for the cluster [LinuxOnly]"]}

SS
------------------------------
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 17 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 19 19:45:22.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8923" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Pods should be updated [NodeConformance] [Conformance]","total":-1,"completed":37,"skipped":244,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 19 19:45:20.507: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:77
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Oct 19 19:45:21.142: INFO: Waiting up to 5m0s for pod "security-context-84fbb552-409b-4919-ad2a-55cc7882bc38" in namespace "security-context-8509" to be "Succeeded or Failed"
Oct 19 19:45:21.247: INFO: Pod "security-context-84fbb552-409b-4919-ad2a-55cc7882bc38": Phase="Pending", Reason="", readiness=false. Elapsed: 104.976037ms
Oct 19 19:45:23.353: INFO: Pod "security-context-84fbb552-409b-4919-ad2a-55cc7882bc38": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.21177398s
STEP: Saw pod success
Oct 19 19:45:23.353: INFO: Pod "security-context-84fbb552-409b-4919-ad2a-55cc7882bc38" satisfied condition "Succeeded or Failed"
Oct 19 19:45:23.463: INFO: Trying to get logs from node ip-172-20-43-129.eu-west-1.compute.internal pod security-context-84fbb552-409b-4919-ad2a-55cc7882bc38 container test-container: <nil>
STEP: delete the pod
Oct 19 19:45:23.681: INFO: Waiting for pod security-context-84fbb552-409b-4919-ad2a-55cc7882bc38 to disappear
Oct 19 19:45:23.786: INFO: Pod security-context-84fbb552-409b-4919-ad2a-55cc7882bc38 no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 19 19:45:23.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-8509" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]","total":-1,"completed":38,"skipped":263,"failed":3,"failures":["[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PVC and non-pre-bound PV: test write access","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:45:24.015: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 25 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: hostPath]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver hostPath doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 5 lines ...
Oct 19 19:45:20.793: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a volume subpath [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test substitution in volume subpath
Oct 19 19:45:21.435: INFO: Waiting up to 5m0s for pod "var-expansion-fe2ce80d-5d70-4063-8cce-015ae161ce96" in namespace "var-expansion-9729" to be "Succeeded or Failed"
Oct 19 19:45:21.542: INFO: Pod "var-expansion-fe2ce80d-5d70-4063-8cce-015ae161ce96": Phase="Pending", Reason="", readiness=false. Elapsed: 106.699977ms
Oct 19 19:45:23.649: INFO: Pod "var-expansion-fe2ce80d-5d70-4063-8cce-015ae161ce96": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.213578759s
STEP: Saw pod success
Oct 19 19:45:23.649: INFO: Pod "var-expansion-fe2ce80d-5d70-4063-8cce-015ae161ce96" satisfied condition "Succeeded or Failed"
Oct 19 19:45:23.756: INFO: Trying to get logs from node ip-172-20-52-34.eu-west-1.compute.internal pod var-expansion-fe2ce80d-5d70-4063-8cce-015ae161ce96 container dapi-container: <nil>
STEP: delete the pod
Oct 19 19:45:23.975: INFO: Waiting for pod var-expansion-fe2ce80d-5d70-4063-8cce-015ae161ce96 to disappear
Oct 19 19:45:24.086: INFO: Pod var-expansion-fe2ce80d-5d70-4063-8cce-015ae161ce96 no longer exists
[AfterEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 19 19:45:24.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-9729" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a volume subpath [Conformance]","total":-1,"completed":27,"skipped":192,"failed":1,"failures":["[sig-network] Services should implement service.kubernetes.io/headless"]}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:45:24.332: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 27 lines ...
[AfterEach] [sig-api-machinery] client-go should negotiate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 19 19:45:24.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

•
------------------------------
{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/json,application/vnd.kubernetes.protobuf\"","total":-1,"completed":28,"skipped":196,"failed":1,"failures":["[sig-network] Services should implement service.kubernetes.io/headless"]}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:45:24.689: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 43 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:37
[It] should support subPath [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:93
STEP: Creating a pod to test hostPath subPath
Oct 19 19:45:21.851: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-3444" to be "Succeeded or Failed"
Oct 19 19:45:21.959: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 107.856947ms
Oct 19 19:45:24.068: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.216992517s
STEP: Saw pod success
Oct 19 19:45:24.069: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
Oct 19 19:45:24.177: INFO: Trying to get logs from node ip-172-20-43-129.eu-west-1.compute.internal pod pod-host-path-test container test-container-2: <nil>
STEP: delete the pod
Oct 19 19:45:24.403: INFO: Waiting for pod pod-host-path-test to disappear
Oct 19 19:45:24.511: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 19 19:45:24.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-3444" for this suite.

•S
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":33,"skipped":227,"failed":3,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 19 19:44:58.051: INFO: >>> kubeConfig: /root/.kube/config
... skipping 18 lines ...
Oct 19 19:45:16.861: INFO: PersistentVolumeClaim pvc-j8kzw found but phase is Pending instead of Bound.
Oct 19 19:45:18.967: INFO: PersistentVolumeClaim pvc-j8kzw found and phase=Bound (14.846372956s)
Oct 19 19:45:18.967: INFO: Waiting up to 3m0s for PersistentVolume local-r877w to have phase Bound
Oct 19 19:45:19.071: INFO: PersistentVolume local-r877w found and phase=Bound (104.639096ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-9xkx
STEP: Creating a pod to test subpath
Oct 19 19:45:19.387: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-9xkx" in namespace "provisioning-7260" to be "Succeeded or Failed"
Oct 19 19:45:19.493: INFO: Pod "pod-subpath-test-preprovisionedpv-9xkx": Phase="Pending", Reason="", readiness=false. Elapsed: 105.368597ms
Oct 19 19:45:21.635: INFO: Pod "pod-subpath-test-preprovisionedpv-9xkx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.247350527s
Oct 19 19:45:23.740: INFO: Pod "pod-subpath-test-preprovisionedpv-9xkx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.352903558s
STEP: Saw pod success
Oct 19 19:45:23.740: INFO: Pod "pod-subpath-test-preprovisionedpv-9xkx" satisfied condition "Succeeded or Failed"
Oct 19 19:45:23.845: INFO: Trying to get logs from node ip-172-20-35-5.eu-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-9xkx container test-container-subpath-preprovisionedpv-9xkx: <nil>
STEP: delete the pod
Oct 19 19:45:24.063: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-9xkx to disappear
Oct 19 19:45:24.169: INFO: Pod pod-subpath-test-preprovisionedpv-9xkx no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-9xkx
Oct 19 19:45:24.169: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-9xkx" in namespace "provisioning-7260"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:369
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":34,"skipped":227,"failed":3,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}

SSSSSSS
------------------------------
{"msg":"PASSED [sig-storage] HostPath should support subPath [NodeConformance]","total":-1,"completed":39,"skipped":302,"failed":3,"failures":["[sig-network] Conntrack should drop INVALID conntrack entries","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]"]}
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 19 19:45:24.738: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename tables
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 5 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 19 19:45:25.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-6236" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":-1,"completed":40,"skipped":302,"failed":3,"failures":["[sig-network] Conntrack should drop INVALID conntrack entries","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:45:25.729: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 70 lines ...
[AfterEach] [sig-api-machinery] client-go should negotiate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 19 19:45:25.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

•
------------------------------
{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/json\"","total":-1,"completed":35,"skipped":234,"failed":3,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}

SS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 19 19:45:22.818: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support seccomp unconfined on the container [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:161
STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod
Oct 19 19:45:23.456: INFO: Waiting up to 5m0s for pod "security-context-9aaaab3a-af98-4fef-8901-2fd6ac9eee58" in namespace "security-context-1529" to be "Succeeded or Failed"
Oct 19 19:45:23.563: INFO: Pod "security-context-9aaaab3a-af98-4fef-8901-2fd6ac9eee58": Phase="Pending", Reason="", readiness=false. Elapsed: 106.394977ms
Oct 19 19:45:25.669: INFO: Pod "security-context-9aaaab3a-af98-4fef-8901-2fd6ac9eee58": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.212834479s
STEP: Saw pod success
Oct 19 19:45:25.669: INFO: Pod "security-context-9aaaab3a-af98-4fef-8901-2fd6ac9eee58" satisfied condition "Succeeded or Failed"
Oct 19 19:45:25.776: INFO: Trying to get logs from node ip-172-20-52-34.eu-west-1.compute.internal pod security-context-9aaaab3a-af98-4fef-8901-2fd6ac9eee58 container test-container: <nil>
STEP: delete the pod
Oct 19 19:45:25.991: INFO: Waiting for pod security-context-9aaaab3a-af98-4fef-8901-2fd6ac9eee58 to disappear
Oct 19 19:45:26.097: INFO: Pod security-context-9aaaab3a-af98-4fef-8901-2fd6ac9eee58 no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 19 19:45:26.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-1529" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Security Context should support seccomp unconfined on the container [LinuxOnly]","total":-1,"completed":38,"skipped":247,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:45:26.360: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 46 lines ...
Oct 19 19:45:20.364: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support existing directories when readOnly specified in the volumeSource
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:399
Oct 19 19:45:20.896: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Oct 19 19:45:21.111: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-9390" in namespace "provisioning-9390" to be "Succeeded or Failed"
Oct 19 19:45:21.217: INFO: Pod "hostpath-symlink-prep-provisioning-9390": Phase="Pending", Reason="", readiness=false. Elapsed: 105.751747ms
Oct 19 19:45:23.323: INFO: Pod "hostpath-symlink-prep-provisioning-9390": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.212450779s
STEP: Saw pod success
Oct 19 19:45:23.324: INFO: Pod "hostpath-symlink-prep-provisioning-9390" satisfied condition "Succeeded or Failed"
Oct 19 19:45:23.324: INFO: Deleting pod "hostpath-symlink-prep-provisioning-9390" in namespace "provisioning-9390"
Oct 19 19:45:23.432: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-9390" to be fully deleted
Oct 19 19:45:23.540: INFO: Creating resource for inline volume
Oct 19 19:45:23.540: INFO: Driver hostPathSymlink on volume type InlineVolume doesn't support readOnly source
STEP: Deleting pod
Oct 19 19:45:23.541: INFO: Deleting pod "pod-subpath-test-inlinevolume-n5h6" in namespace "provisioning-9390"
Oct 19 19:45:23.754: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-9390" in namespace "provisioning-9390" to be "Succeeded or Failed"
Oct 19 19:45:23.860: INFO: Pod "hostpath-symlink-prep-provisioning-9390": Phase="Pending", Reason="", readiness=false. Elapsed: 105.888007ms
Oct 19 19:45:25.966: INFO: Pod "hostpath-symlink-prep-provisioning-9390": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.212413519s
STEP: Saw pod success
Oct 19 19:45:25.966: INFO: Pod "hostpath-symlink-prep-provisioning-9390" satisfied condition "Succeeded or Failed"
Oct 19 19:45:25.966: INFO: Deleting pod "hostpath-symlink-prep-provisioning-9390" in namespace "provisioning-9390"
Oct 19 19:45:26.079: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-9390" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 19 19:45:26.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-9390" for this suite.
... skipping 50 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 19 19:45:26.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "request-timeout-232" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Server request timeout should return HTTP status code 400 if the user specifies an invalid timeout in the request URL","total":-1,"completed":41,"skipped":311,"failed":3,"failures":["[sig-network] Conntrack should drop INVALID conntrack entries","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:45:26.689: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 24 lines ...
[AfterEach] [sig-api-machinery] client-go should negotiate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 19 19:45:26.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

•
------------------------------
{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/vnd.kubernetes.protobuf,application/json\"","total":-1,"completed":42,"skipped":314,"failed":3,"failures":["[sig-network] Conntrack should drop INVALID conntrack entries","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:45:27.069: INFO: Only supported for providers [gce gke] (not aws)
... skipping 24 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-17438872-62b4-4ea2-bd0c-538f5553f184
STEP: Creating a pod to test consume configMaps
Oct 19 19:45:26.810: INFO: Waiting up to 5m0s for pod "pod-configmaps-1cb34443-5e19-42e4-962f-6192296e5de4" in namespace "configmap-1209" to be "Succeeded or Failed"
Oct 19 19:45:26.915: INFO: Pod "pod-configmaps-1cb34443-5e19-42e4-962f-6192296e5de4": Phase="Pending", Reason="", readiness=false. Elapsed: 104.996987ms
Oct 19 19:45:29.020: INFO: Pod "pod-configmaps-1cb34443-5e19-42e4-962f-6192296e5de4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.21033378s
STEP: Saw pod success
Oct 19 19:45:29.020: INFO: Pod "pod-configmaps-1cb34443-5e19-42e4-962f-6192296e5de4" satisfied condition "Succeeded or Failed"
Oct 19 19:45:29.126: INFO: Trying to get logs from node ip-172-20-43-129.eu-west-1.compute.internal pod pod-configmaps-1cb34443-5e19-42e4-962f-6192296e5de4 container agnhost-container: <nil>
STEP: delete the pod
Oct 19 19:45:29.344: INFO: Waiting for pod pod-configmaps-1cb34443-5e19-42e4-962f-6192296e5de4 to disappear
Oct 19 19:45:29.450: INFO: Pod pod-configmaps-1cb34443-5e19-42e4-962f-6192296e5de4 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 19 19:45:29.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1209" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":36,"skipped":236,"failed":3,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:45:29.727: INFO: Only supported for providers [azure] (not aws)
... skipping 71 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      Verify if offline PVC expansion works
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:174
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works","total":-1,"completed":21,"skipped":170,"failed":2,"failures":["[sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PV and a pre-bound PVC: test write access"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:45:32.690: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 38 lines ...
Oct 19 19:45:31.181: INFO: PersistentVolumeClaim pvc-kmmtz found but phase is Pending instead of Bound.
Oct 19 19:45:33.295: INFO: PersistentVolumeClaim pvc-kmmtz found and phase=Bound (2.22211023s)
Oct 19 19:45:33.295: INFO: Waiting up to 3m0s for PersistentVolume local-97nm8 to have phase Bound
Oct 19 19:45:33.403: INFO: PersistentVolume local-97nm8 found and phase=Bound (107.964786ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-hvtl
STEP: Creating a pod to test exec-volume-test
Oct 19 19:45:33.729: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-hvtl" in namespace "volume-1911" to be "Succeeded or Failed"
Oct 19 19:45:33.837: INFO: Pod "exec-volume-test-preprovisionedpv-hvtl": Phase="Pending", Reason="", readiness=false. Elapsed: 108.042889ms
Oct 19 19:45:35.946: INFO: Pod "exec-volume-test-preprovisionedpv-hvtl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.216761671s
Oct 19 19:45:38.054: INFO: Pod "exec-volume-test-preprovisionedpv-hvtl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.325328606s
STEP: Saw pod success
Oct 19 19:45:38.054: INFO: Pod "exec-volume-test-preprovisionedpv-hvtl" satisfied condition "Succeeded or Failed"
Oct 19 19:45:38.163: INFO: Trying to get logs from node ip-172-20-52-34.eu-west-1.compute.internal pod exec-volume-test-preprovisionedpv-hvtl container exec-container-preprovisionedpv-hvtl: <nil>
STEP: delete the pod
Oct 19 19:45:38.388: INFO: Waiting for pod exec-volume-test-preprovisionedpv-hvtl to disappear
Oct 19 19:45:38.496: INFO: Pod exec-volume-test-preprovisionedpv-hvtl no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-hvtl
Oct 19 19:45:38.496: INFO: Deleting pod "exec-volume-test-preprovisionedpv-hvtl" in namespace "volume-1911"
... skipping 17 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":43,"skipped":317,"failed":3,"failures":["[sig-network] Conntrack should drop INVALID conntrack entries","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]"]}

SSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 28 lines ...
• [SLOW TEST:12.080 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":-1,"completed":37,"skipped":248,"failed":3,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:45:41.860: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 58 lines ...
Oct 19 19:45:16.878: INFO: PersistentVolumeClaim pvc-4jgph found but phase is Pending instead of Bound.
Oct 19 19:45:18.987: INFO: PersistentVolumeClaim pvc-4jgph found and phase=Bound (4.322170812s)
Oct 19 19:45:18.987: INFO: Waiting up to 3m0s for PersistentVolume aws-pxflc to have phase Bound
Oct 19 19:45:19.094: INFO: PersistentVolume aws-pxflc found and phase=Bound (106.249927ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-tlbl
STEP: Creating a pod to test exec-volume-test
Oct 19 19:45:19.414: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-tlbl" in namespace "volume-8882" to be "Succeeded or Failed"
Oct 19 19:45:19.520: INFO: Pod "exec-volume-test-preprovisionedpv-tlbl": Phase="Pending", Reason="", readiness=false. Elapsed: 106.535087ms
Oct 19 19:45:21.638: INFO: Pod "exec-volume-test-preprovisionedpv-tlbl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.223890798s
Oct 19 19:45:23.744: INFO: Pod "exec-volume-test-preprovisionedpv-tlbl": Phase="Pending", Reason="", readiness=false. Elapsed: 4.330773938s
Oct 19 19:45:25.852: INFO: Pod "exec-volume-test-preprovisionedpv-tlbl": Phase="Pending", Reason="", readiness=false. Elapsed: 6.43801118s
Oct 19 19:45:27.960: INFO: Pod "exec-volume-test-preprovisionedpv-tlbl": Phase="Pending", Reason="", readiness=false. Elapsed: 8.546016334s
Oct 19 19:45:30.068: INFO: Pod "exec-volume-test-preprovisionedpv-tlbl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.653910785s
STEP: Saw pod success
Oct 19 19:45:30.068: INFO: Pod "exec-volume-test-preprovisionedpv-tlbl" satisfied condition "Succeeded or Failed"
Oct 19 19:45:30.174: INFO: Trying to get logs from node ip-172-20-52-34.eu-west-1.compute.internal pod exec-volume-test-preprovisionedpv-tlbl container exec-container-preprovisionedpv-tlbl: <nil>
STEP: delete the pod
Oct 19 19:45:30.394: INFO: Waiting for pod exec-volume-test-preprovisionedpv-tlbl to disappear
Oct 19 19:45:30.500: INFO: Pod exec-volume-test-preprovisionedpv-tlbl no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-tlbl
Oct 19 19:45:30.501: INFO: Deleting pod "exec-volume-test-preprovisionedpv-tlbl" in namespace "volume-8882"
STEP: Deleting pv and pvc
Oct 19 19:45:30.608: INFO: Deleting PersistentVolumeClaim "pvc-4jgph"
Oct 19 19:45:30.715: INFO: Deleting PersistentVolume "aws-pxflc"
Oct 19 19:45:31.048: INFO: Couldn't delete PD "aws://eu-west-1a/vol-0b40571295f5861f9", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0b40571295f5861f9 is currently attached to i-069822d95ce9600b4
	status code: 400, request id: c03e75f4-867d-4a2d-b73f-ef7ca641bcdf
Oct 19 19:45:36.683: INFO: Couldn't delete PD "aws://eu-west-1a/vol-0b40571295f5861f9", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0b40571295f5861f9 is currently attached to i-069822d95ce9600b4
	status code: 400, request id: ed7ff7fb-cf69-4b54-af96-bc97a882f579
Oct 19 19:45:42.373: INFO: Successfully deleted PD "aws://eu-west-1a/vol-0b40571295f5861f9".
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 19 19:45:42.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-8882" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (ext4)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume","total":-1,"completed":16,"skipped":78,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","[sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-network] DNS should provide DNS for services  [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:45:42.612: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 96 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithoutformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":18,"skipped":117,"failed":3,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:45:43.373: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 562 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  With a server listening on 0.0.0.0
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:452
    should support forwarding over websockets
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:468
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 should support forwarding over websockets","total":-1,"completed":44,"skipped":322,"failed":3,"failures":["[sig-network] Conntrack should drop INVALID conntrack entries","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
... skipping 121 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support two pods which share the same volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:173
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support two pods which share the same volume","total":-1,"completed":27,"skipped":157,"failed":2,"failures":["[sig-network] Services should implement service.kubernetes.io/service-proxy-name","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]"]}
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:45:50.447: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 95 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":19,"skipped":130,"failed":3,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]"]}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 15 lines ...
Oct 19 19:45:46.650: INFO: PersistentVolumeClaim pvc-x6dx5 found but phase is Pending instead of Bound.
Oct 19 19:45:48.758: INFO: PersistentVolumeClaim pvc-x6dx5 found and phase=Bound (2.215361594s)
Oct 19 19:45:48.759: INFO: Waiting up to 3m0s for PersistentVolume local-dsbpq to have phase Bound
Oct 19 19:45:48.865: INFO: PersistentVolume local-dsbpq found and phase=Bound (106.812256ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-4pn5
STEP: Creating a pod to test subpath
Oct 19 19:45:49.188: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-4pn5" in namespace "provisioning-885" to be "Succeeded or Failed"
Oct 19 19:45:49.295: INFO: Pod "pod-subpath-test-preprovisionedpv-4pn5": Phase="Pending", Reason="", readiness=false. Elapsed: 106.631317ms
Oct 19 19:45:51.406: INFO: Pod "pod-subpath-test-preprovisionedpv-4pn5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.217376811s
Oct 19 19:45:53.513: INFO: Pod "pod-subpath-test-preprovisionedpv-4pn5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.324293935s
STEP: Saw pod success
Oct 19 19:45:53.513: INFO: Pod "pod-subpath-test-preprovisionedpv-4pn5" satisfied condition "Succeeded or Failed"
Oct 19 19:45:53.623: INFO: Trying to get logs from node ip-172-20-35-5.eu-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-4pn5 container test-container-subpath-preprovisionedpv-4pn5: <nil>
STEP: delete the pod
Oct 19 19:45:53.850: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-4pn5 to disappear
Oct 19 19:45:53.957: INFO: Pod pod-subpath-test-preprovisionedpv-4pn5 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-4pn5
Oct 19 19:45:53.957: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-4pn5" in namespace "provisioning-885"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:369
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":17,"skipped":81,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","[sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-network] DNS should provide DNS for services  [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:45:55.476: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
... skipping 54 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196

      Only supported for providers [openstack] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1092
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":23,"skipped":204,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 19 19:32:58.457: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 7 lines ...
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Oct 19 19:33:33.633: INFO: Unable to read wheezy_udp@dns-test-service-3.dns-587.svc.cluster.local from pod dns-587/dns-test-1a99074d-dd68-410b-82c7-5e1fb5e87e87: the server is currently unable to handle the request (get pods dns-test-1a99074d-dd68-410b-82c7-5e1fb5e87e87)
Oct 19 19:34:03.739: INFO: Unable to read jessie_udp@dns-test-service-3.dns-587.svc.cluster.local from pod dns-587/dns-test-1a99074d-dd68-410b-82c7-5e1fb5e87e87: the server is currently unable to handle the request (get pods dns-test-1a99074d-dd68-410b-82c7-5e1fb5e87e87)
Oct 19 19:34:03.739: INFO: Lookups using dns-587/dns-test-1a99074d-dd68-410b-82c7-5e1fb5e87e87 failed for: [wheezy_udp@dns-test-service-3.dns-587.svc.cluster.local jessie_udp@dns-test-service-3.dns-587.svc.cluster.local]

Oct 19 19:34:38.846: INFO: Unable to read wheezy_udp@dns-test-service-3.dns-587.svc.cluster.local from pod dns-587/dns-test-1a99074d-dd68-410b-82c7-5e1fb5e87e87: the server is currently unable to handle the request (get pods dns-test-1a99074d-dd68-410b-82c7-5e1fb5e87e87)
Oct 19 19:35:08.952: INFO: Unable to read jessie_udp@dns-test-service-3.dns-587.svc.cluster.local from pod dns-587/dns-test-1a99074d-dd68-410b-82c7-5e1fb5e87e87: the server is currently unable to handle the request (get pods dns-test-1a99074d-dd68-410b-82c7-5e1fb5e87e87)
Oct 19 19:35:08.952: INFO: Lookups using dns-587/dns-test-1a99074d-dd68-410b-82c7-5e1fb5e87e87 failed for: [wheezy_udp@dns-test-service-3.dns-587.svc.cluster.local jessie_udp@dns-test-service-3.dns-587.svc.cluster.local]

Oct 19 19:35:43.846: INFO: Unable to read wheezy_udp@dns-test-service-3.dns-587.svc.cluster.local from pod dns-587/dns-test-1a99074d-dd68-410b-82c7-5e1fb5e87e87: the server is currently unable to handle the request (get pods dns-test-1a99074d-dd68-410b-82c7-5e1fb5e87e87)
Oct 19 19:36:13.953: INFO: Unable to read jessie_udp@dns-test-service-3.dns-587.svc.cluster.local from pod dns-587/dns-test-1a99074d-dd68-410b-82c7-5e1fb5e87e87: the server is currently unable to handle the request (get pods dns-test-1a99074d-dd68-410b-82c7-5e1fb5e87e87)
Oct 19 19:36:13.953: INFO: Lookups using dns-587/dns-test-1a99074d-dd68-410b-82c7-5e1fb5e87e87 failed for: [wheezy_udp@dns-test-service-3.dns-587.svc.cluster.local jessie_udp@dns-test-service-3.dns-587.svc.cluster.local]

Oct 19 19:36:48.847: INFO: Unable to read wheezy_udp@dns-test-service-3.dns-587.svc.cluster.local from pod dns-587/dns-test-1a99074d-dd68-410b-82c7-5e1fb5e87e87: the server is currently unable to handle the request (get pods dns-test-1a99074d-dd68-410b-82c7-5e1fb5e87e87)
Oct 19 19:37:18.954: INFO: Unable to read jessie_udp@dns-test-service-3.dns-587.svc.cluster.local from pod dns-587/dns-test-1a99074d-dd68-410b-82c7-5e1fb5e87e87: the server is currently unable to handle the request (get pods dns-test-1a99074d-dd68-410b-82c7-5e1fb5e87e87)
Oct 19 19:37:18.954: INFO: Lookups using dns-587/dns-test-1a99074d-dd68-410b-82c7-5e1fb5e87e87 failed for: [wheezy_udp@dns-test-service-3.dns-587.svc.cluster.local jessie_udp@dns-test-service-3.dns-587.svc.cluster.local]

Oct 19 19:37:53.846: INFO: Unable to read wheezy_udp@dns-test-service-3.dns-587.svc.cluster.local from pod dns-587/dns-test-1a99074d-dd68-410b-82c7-5e1fb5e87e87: the server is currently unable to handle the request (get pods dns-test-1a99074d-dd68-410b-82c7-5e1fb5e87e87)
Oct 19 19:38:23.953: INFO: Unable to read jessie_udp@dns-test-service-3.dns-587.svc.cluster.local from pod dns-587/dns-test-1a99074d-dd68-410b-82c7-5e1fb5e87e87: the server is currently unable to handle the request (get pods dns-test-1a99074d-dd68-410b-82c7-5e1fb5e87e87)
Oct 19 19:38:23.953: INFO: Lookups using dns-587/dns-test-1a99074d-dd68-410b-82c7-5e1fb5e87e87 failed for: [wheezy_udp@dns-test-service-3.dns-587.svc.cluster.local jessie_udp@dns-test-service-3.dns-587.svc.cluster.local]

Oct 19 19:38:58.847: INFO: Unable to read wheezy_udp@dns-test-service-3.dns-587.svc.cluster.local from pod dns-587/dns-test-1a99074d-dd68-410b-82c7-5e1fb5e87e87: the server is currently unable to handle the request (get pods dns-test-1a99074d-dd68-410b-82c7-5e1fb5e87e87)
Oct 19 19:39:28.957: INFO: Unable to read jessie_udp@dns-test-service-3.dns-587.svc.cluster.local from pod dns-587/dns-test-1a99074d-dd68-410b-82c7-5e1fb5e87e87: the server is currently unable to handle the request (get pods dns-test-1a99074d-dd68-410b-82c7-5e1fb5e87e87)
Oct 19 19:39:28.957: INFO: Lookups using dns-587/dns-test-1a99074d-dd68-410b-82c7-5e1fb5e87e87 failed for: [wheezy_udp@dns-test-service-3.dns-587.svc.cluster.local jessie_udp@dns-test-service-3.dns-587.svc.cluster.local]

Oct 19 19:40:03.847: INFO: Unable to read wheezy_udp@dns-test-service-3.dns-587.svc.cluster.local from pod dns-587/dns-test-1a99074d-dd68-410b-82c7-5e1fb5e87e87: the server is currently unable to handle the request (get pods dns-test-1a99074d-dd68-410b-82c7-5e1fb5e87e87)
Oct 19 19:40:33.953: INFO: Unable to read jessie_udp@dns-test-service-3.dns-587.svc.cluster.local from pod dns-587/dns-test-1a99074d-dd68-410b-82c7-5e1fb5e87e87: the server is currently unable to handle the request (get pods dns-test-1a99074d-dd68-410b-82c7-5e1fb5e87e87)
Oct 19 19:40:33.953: INFO: Lookups using dns-587/dns-test-1a99074d-dd68-410b-82c7-5e1fb5e87e87 failed for: [wheezy_udp@dns-test-service-3.dns-587.svc.cluster.local jessie_udp@dns-test-service-3.dns-587.svc.cluster.local]

Oct 19 19:41:08.847: INFO: Unable to read wheezy_udp@dns-test-service-3.dns-587.svc.cluster.local from pod dns-587/dns-test-1a99074d-dd68-410b-82c7-5e1fb5e87e87: the server is currently unable to handle the request (get pods dns-test-1a99074d-dd68-410b-82c7-5e1fb5e87e87)
Oct 19 19:41:38.953: INFO: Unable to read jessie_udp@dns-test-service-3.dns-587.svc.cluster.local from pod dns-587/dns-test-1a99074d-dd68-410b-82c7-5e1fb5e87e87: the server is currently unable to handle the request (get pods dns-test-1a99074d-dd68-410b-82c7-5e1fb5e87e87)
Oct 19 19:41:38.953: INFO: Lookups using dns-587/dns-test-1a99074d-dd68-410b-82c7-5e1fb5e87e87 failed for: [wheezy_udp@dns-test-service-3.dns-587.svc.cluster.local jessie_udp@dns-test-service-3.dns-587.svc.cluster.local]

Oct 19 19:42:13.848: INFO: Unable to read wheezy_udp@dns-test-service-3.dns-587.svc.cluster.local from pod dns-587/dns-test-1a99074d-dd68-410b-82c7-5e1fb5e87e87: the server is currently unable to handle the request (get pods dns-test-1a99074d-dd68-410b-82c7-5e1fb5e87e87)
Oct 19 19:42:43.954: INFO: Unable to read jessie_udp@dns-test-service-3.dns-587.svc.cluster.local from pod dns-587/dns-test-1a99074d-dd68-410b-82c7-5e1fb5e87e87: the server is currently unable to handle the request (get pods dns-test-1a99074d-dd68-410b-82c7-5e1fb5e87e87)
Oct 19 19:42:43.954: INFO: Lookups using dns-587/dns-test-1a99074d-dd68-410b-82c7-5e1fb5e87e87 failed for: [wheezy_udp@dns-test-service-3.dns-587.svc.cluster.local jessie_udp@dns-test-service-3.dns-587.svc.cluster.local]

Oct 19 19:43:18.847: INFO: Unable to read wheezy_udp@dns-test-service-3.dns-587.svc.cluster.local from pod dns-587/dns-test-1a99074d-dd68-410b-82c7-5e1fb5e87e87: the server is currently unable to handle the request (get pods dns-test-1a99074d-dd68-410b-82c7-5e1fb5e87e87)
Oct 19 19:43:48.954: INFO: Unable to read jessie_udp@dns-test-service-3.dns-587.svc.cluster.local from pod dns-587/dns-test-1a99074d-dd68-410b-82c7-5e1fb5e87e87: the server is currently unable to handle the request (get pods dns-test-1a99074d-dd68-410b-82c7-5e1fb5e87e87)
Oct 19 19:43:48.954: INFO: Lookups using dns-587/dns-test-1a99074d-dd68-410b-82c7-5e1fb5e87e87 failed for: [wheezy_udp@dns-test-service-3.dns-587.svc.cluster.local jessie_udp@dns-test-service-3.dns-587.svc.cluster.local]

Oct 19 19:44:23.847: INFO: Unable to read wheezy_udp@dns-test-service-3.dns-587.svc.cluster.local from pod dns-587/dns-test-1a99074d-dd68-410b-82c7-5e1fb5e87e87: the server is currently unable to handle the request (get pods dns-test-1a99074d-dd68-410b-82c7-5e1fb5e87e87)
Oct 19 19:44:53.953: INFO: Unable to read jessie_udp@dns-test-service-3.dns-587.svc.cluster.local from pod dns-587/dns-test-1a99074d-dd68-410b-82c7-5e1fb5e87e87: the server is currently unable to handle the request (get pods dns-test-1a99074d-dd68-410b-82c7-5e1fb5e87e87)
Oct 19 19:44:53.953: INFO: Lookups using dns-587/dns-test-1a99074d-dd68-410b-82c7-5e1fb5e87e87 failed for: [wheezy_udp@dns-test-service-3.dns-587.svc.cluster.local jessie_udp@dns-test-service-3.dns-587.svc.cluster.local]

Oct 19 19:45:24.060: INFO: Unable to read wheezy_udp@dns-test-service-3.dns-587.svc.cluster.local from pod dns-587/dns-test-1a99074d-dd68-410b-82c7-5e1fb5e87e87: the server is currently unable to handle the request (get pods dns-test-1a99074d-dd68-410b-82c7-5e1fb5e87e87)
Oct 19 19:45:54.166: INFO: Unable to read jessie_udp@dns-test-service-3.dns-587.svc.cluster.local from pod dns-587/dns-test-1a99074d-dd68-410b-82c7-5e1fb5e87e87: the server is currently unable to handle the request (get pods dns-test-1a99074d-dd68-410b-82c7-5e1fb5e87e87)
Oct 19 19:45:54.166: INFO: Lookups using dns-587/dns-test-1a99074d-dd68-410b-82c7-5e1fb5e87e87 failed for: [wheezy_udp@dns-test-service-3.dns-587.svc.cluster.local jessie_udp@dns-test-service-3.dns-587.svc.cluster.local]

Oct 19 19:45:54.167: FAIL: Unexpected error:
    <*errors.errorString | 0xc00023e240>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 25 lines ...
Oct 19 19:45:54.495: INFO: At 2021-10-19 19:33:00 +0000 UTC - event for dns-test-1a99074d-dd68-410b-82c7-5e1fb5e87e87: {kubelet ip-172-20-35-5.eu-west-1.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
Oct 19 19:45:54.495: INFO: At 2021-10-19 19:33:00 +0000 UTC - event for dns-test-1a99074d-dd68-410b-82c7-5e1fb5e87e87: {kubelet ip-172-20-35-5.eu-west-1.compute.internal} Created: Created container querier
Oct 19 19:45:54.495: INFO: At 2021-10-19 19:33:00 +0000 UTC - event for dns-test-1a99074d-dd68-410b-82c7-5e1fb5e87e87: {kubelet ip-172-20-35-5.eu-west-1.compute.internal} Started: Started container querier
Oct 19 19:45:54.495: INFO: At 2021-10-19 19:33:00 +0000 UTC - event for dns-test-1a99074d-dd68-410b-82c7-5e1fb5e87e87: {kubelet ip-172-20-35-5.eu-west-1.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4" already present on machine
Oct 19 19:45:54.495: INFO: At 2021-10-19 19:33:00 +0000 UTC - event for dns-test-1a99074d-dd68-410b-82c7-5e1fb5e87e87: {kubelet ip-172-20-35-5.eu-west-1.compute.internal} Created: Created container jessie-querier
Oct 19 19:45:54.495: INFO: At 2021-10-19 19:33:00 +0000 UTC - event for dns-test-1a99074d-dd68-410b-82c7-5e1fb5e87e87: {kubelet ip-172-20-35-5.eu-west-1.compute.internal} Started: Started container jessie-querier
Oct 19 19:45:54.495: INFO: At 2021-10-19 19:34:02 +0000 UTC - event for dns-test-1a99074d-dd68-410b-82c7-5e1fb5e87e87: {kubelet ip-172-20-35-5.eu-west-1.compute.internal} BackOff: Back-off restarting failed container
Oct 19 19:45:54.495: INFO: At 2021-10-19 19:34:03 +0000 UTC - event for dns-test-1a99074d-dd68-410b-82c7-5e1fb5e87e87: {kubelet ip-172-20-35-5.eu-west-1.compute.internal} BackOff: Back-off restarting failed container
Oct 19 19:45:54.601: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Oct 19 19:45:54.601: INFO: 
Oct 19 19:45:54.707: INFO: 
Logging node info for node ip-172-20-35-5.eu-west-1.compute.internal
Oct 19 19:45:54.813: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-35-5.eu-west-1.compute.internal    ca6c3f72-9fb4-4a15-a840-4e073992298a 39995 0 2021-10-19 19:19:29 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:eu-west-1 failure-domain.beta.kubernetes.io/zone:eu-west-1a kops.k8s.io/instancegroup:nodes-eu-west-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-35-5.eu-west-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.hostpath.csi/node:ip-172-20-35-5.eu-west-1.compute.internal topology.kubernetes.io/region:eu-west-1 topology.kubernetes.io/zone:eu-west-1a] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-7269":"csi-mock-csi-mock-volumes-7269"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kops-controller Update v1 2021-10-19 19:19:29 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/instancegroup":{},"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}}} {kube-controller-manager Update v1 2021-10-19 19:40:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.1.0/24\"":{}}},"f:status":{"f:volumesAttached":{}}}} {kubelet Update v1 2021-10-19 19:45:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.hostpath.csi/node":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}}}]},Spec:NodeSpec{PodCIDR:100.96.1.0/24,DoNotUseExternalID:,ProviderID:aws:///eu-west-1a/i-094ea4cd3ef7af828,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-aws-ebs: {{25 0} {<nil>} 25 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{47455764480 0} {<nil>}  BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4061720576 0} {<nil>} 3966524Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-aws-ebs: {{25 0} {<nil>} 25 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42710187962 0} {<nil>} 42710187962 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3956862976 0} {<nil>} 3864124Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-19 19:45:05 +0000 UTC,LastTransitionTime:2021-10-19 19:19:28 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-19 19:45:05 +0000 UTC,LastTransitionTime:2021-10-19 19:19:28 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-19 19:45:05 +0000 UTC,LastTransitionTime:2021-10-19 19:19:28 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-19 19:45:05 +0000 UTC,LastTransitionTime:2021-10-19 19:19:29 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.35.5,},NodeAddress{Type:ExternalIP,Address:34.245.165.72,},NodeAddress{Type:Hostname,Address:ip-172-20-35-5.eu-west-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-35-5.eu-west-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-34-245-165-72.eu-west-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2d8ec38cfdc1a50c539c5a052a0c59,SystemUUID:ec2d8ec3-8cfd-c1a5-0c53-9c5a052a0c59,BootID:8674677a-941d-46d4-a73a-1b819d5ed008,KernelVersion:5.10.69-flatcar,OSImage:Flatcar Container Linux by Kinvolk 2905.2.5 (Oklo),ContainerRuntimeVersion:containerd://1.5.4,KubeletVersion:v1.21.5,KubeProxyVersion:v1.21.5,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.21.5],SizeBytes:105352393,},ContainerImage{Names:[docker.io/library/nginx@sha256:644a70516a26004c97d0d85c7fe1d0c3a67ea8ab7ddf4aff193d9f301670cf36 docker.io/library/nginx:latest],SizeBytes:53792768,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:50002177,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[docker.io/kopeio/networking-agent@sha256:2d16bdbc3257c42cdc59b05b8fad86653033f19cfafa709f263e93c8f7002932 docker.io/kopeio/networking-agent:1.0.20181028],SizeBytes:25781346,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:695505fcfcc69f1cf35665dce487aad447adbb9af69b796d6437f869015d1157 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.1],SizeBytes:21212251,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:51f2dfde5bccac7854b3704689506aeecfb793328427b91115ba253a93e60782 k8s.gcr.io/sig-storage/csi-snapshotter:v4.0.0],SizeBytes:20194320,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:50c3cfd458fc8e0bf3c8c521eac39172009382fc66dc5044a330d137c6ed0b09 k8s.gcr.io/sig-storage/csi-attacher:v3.1.0],SizeBytes:20103959,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a k8s.gcr.io/sig-storage/csi-resizer:v1.1.0],SizeBytes:20096832,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def k8s.gcr.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:d2b357bb02430fee9eaa43b16083981463d260419fe3acb2f560ede5c129f6f5 k8s.gcr.io/sig-storage/hostpathplugin:v1.4.0],SizeBytes:13995876,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:6e5a02c21641597998b4be7cb5eb1e7b02c0d8d23cce4dd09f4682d463798890 k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:e07f914c32f0505e4c470a62a40ee43f84cbf8dc46ff861f31b14457ccbad108 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.0.1],SizeBytes:8415088,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:48da0e4ed7238ad461ea05f68c25921783c37b315f21a5c5a2780157a6460994 k8s.gcr.io/sig-storage/livenessprobe:v2.2.0],SizeBytes:8279778,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[k8s.gcr.io/busybox:latest],SizeBytes:1144547,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause:3.5],SizeBytes:301416,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-provisioning-9866^747cf9aa-3114-11ec-96f4-56b77b4f0577],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-9866^747cf9aa-3114-11ec-96f4-56b77b4f0577,DevicePath:,},},Config:nil,},}
Oct 19 19:45:54.814: INFO: 
... skipping 206 lines ...
• Failure [780.470 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide DNS for ExternalName services [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Oct 19 19:45:54.167: Unexpected error:
      <*errors.errorString | 0xc00023e240>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:463
------------------------------
{"msg":"FAILED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":-1,"completed":23,"skipped":204,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:45:58.945: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 37 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when scheduling a busybox command that always fails in a pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:79
    should have an terminated reason [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":85,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","[sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-network] DNS should provide DNS for services  [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:46:00.611: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 62 lines ...
• [SLOW TEST:243.524 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should *not* be restarted by liveness probe because startup probe delays it
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:342
------------------------------
{"msg":"PASSED [sig-node] Probing container should *not* be restarted by liveness probe because startup probe delays it","total":-1,"completed":18,"skipped":198,"failed":1,"failures":["[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]"]}

SS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 19 19:46:00.666: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Oct 19 19:46:01.307: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-1ec99c77-6c53-49ef-9444-cb8191604822" in namespace "security-context-test-2096" to be "Succeeded or Failed"
Oct 19 19:46:01.413: INFO: Pod "busybox-readonly-false-1ec99c77-6c53-49ef-9444-cb8191604822": Phase="Pending", Reason="", readiness=false. Elapsed: 106.508677ms
Oct 19 19:46:03.521: INFO: Pod "busybox-readonly-false-1ec99c77-6c53-49ef-9444-cb8191604822": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.213699348s
Oct 19 19:46:03.521: INFO: Pod "busybox-readonly-false-1ec99c77-6c53-49ef-9444-cb8191604822" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 19 19:46:03.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-2096" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":93,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","[sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-network] DNS should provide DNS for services  [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
... skipping 137 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume one after the other
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":28,"skipped":164,"failed":2,"failures":["[sig-network] Services should implement service.kubernetes.io/service-proxy-name","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:46:05.788: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 108 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 19 19:46:07.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-1377" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":29,"skipped":173,"failed":2,"failures":["[sig-network] Services should implement service.kubernetes.io/service-proxy-name","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:46:08.036: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 190 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":39,"skipped":278,"failed":3,"failures":["[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PVC and non-pre-bound PV: test write access","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]"]}

S
------------------------------
[BeforeEach] [sig-node] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 19 19:46:09.898: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-f6a9a8b2-3511-4c7e-9006-ee15678a3794
STEP: Creating a pod to test consume secrets
Oct 19 19:46:10.638: INFO: Waiting up to 5m0s for pod "pod-secrets-0950e1b5-51f5-42e0-a2c4-b4389573d2c4" in namespace "secrets-1887" to be "Succeeded or Failed"
Oct 19 19:46:10.744: INFO: Pod "pod-secrets-0950e1b5-51f5-42e0-a2c4-b4389573d2c4": Phase="Pending", Reason="", readiness=false. Elapsed: 105.960407ms
Oct 19 19:46:12.851: INFO: Pod "pod-secrets-0950e1b5-51f5-42e0-a2c4-b4389573d2c4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.2123964s
STEP: Saw pod success
Oct 19 19:46:12.851: INFO: Pod "pod-secrets-0950e1b5-51f5-42e0-a2c4-b4389573d2c4" satisfied condition "Succeeded or Failed"
Oct 19 19:46:12.957: INFO: Trying to get logs from node ip-172-20-55-71.eu-west-1.compute.internal pod pod-secrets-0950e1b5-51f5-42e0-a2c4-b4389573d2c4 container secret-env-test: <nil>
STEP: delete the pod
Oct 19 19:46:13.176: INFO: Waiting for pod pod-secrets-0950e1b5-51f5-42e0-a2c4-b4389573d2c4 to disappear
Oct 19 19:46:13.281: INFO: Pod pod-secrets-0950e1b5-51f5-42e0-a2c4-b4389573d2c4 no longer exists
[AfterEach] [sig-node] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 19 19:46:13.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1887" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":-1,"completed":40,"skipped":279,"failed":3,"failures":["[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PVC and non-pre-bound PV: test write access","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]"]}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:46:13.538: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 179 lines ...
Oct 19 19:46:03.413: INFO: The status of Pod pod-with-poststart-http-hook is Running (Ready = false)
Oct 19 19:46:05.413: INFO: The status of Pod pod-with-poststart-http-hook is Running (Ready = false)
Oct 19 19:46:07.413: INFO: The status of Pod pod-with-poststart-http-hook is Running (Ready = false)
Oct 19 19:46:09.412: INFO: The status of Pod pod-with-poststart-http-hook is Running (Ready = false)
Oct 19 19:46:11.412: INFO: The status of Pod pod-with-poststart-http-hook is Running (Ready = false)
Oct 19 19:46:11.518: INFO: The status of Pod pod-with-poststart-http-hook is Running (Ready = false)
Oct 19 19:46:11.519: FAIL: Unexpected error:
    <*errors.errorString | 0xc0001c4250>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 21 lines ...
Oct 19 19:46:11.626: INFO: At 2021-10-19 19:41:07 +0000 UTC - event for pod-handle-http-request: {kubelet ip-172-20-55-71.eu-west-1.compute.internal} Created: Created container agnhost-container
Oct 19 19:46:11.626: INFO: At 2021-10-19 19:41:07 +0000 UTC - event for pod-handle-http-request: {kubelet ip-172-20-55-71.eu-west-1.compute.internal} Started: Started container agnhost-container
Oct 19 19:46:11.626: INFO: At 2021-10-19 19:41:11 +0000 UTC - event for pod-with-poststart-http-hook: {default-scheduler } Scheduled: Successfully assigned container-lifecycle-hook-2255/pod-with-poststart-http-hook to ip-172-20-52-34.eu-west-1.compute.internal
Oct 19 19:46:11.626: INFO: At 2021-10-19 19:41:11 +0000 UTC - event for pod-with-poststart-http-hook: {kubelet ip-172-20-52-34.eu-west-1.compute.internal} Pulled: Container image "k8s.gcr.io/pause:3.4.1" already present on machine
Oct 19 19:46:11.626: INFO: At 2021-10-19 19:41:11 +0000 UTC - event for pod-with-poststart-http-hook: {kubelet ip-172-20-52-34.eu-west-1.compute.internal} Created: Created container pod-with-poststart-http-hook
Oct 19 19:46:11.626: INFO: At 2021-10-19 19:41:11 +0000 UTC - event for pod-with-poststart-http-hook: {kubelet ip-172-20-52-34.eu-west-1.compute.internal} Started: Started container pod-with-poststart-http-hook
Oct 19 19:46:11.626: INFO: At 2021-10-19 19:41:41 +0000 UTC - event for pod-with-poststart-http-hook: {kubelet ip-172-20-52-34.eu-west-1.compute.internal} FailedPostStartHook: HTTP lifecycle hook (/echo?msg=poststart) for Container "pod-with-poststart-http-hook" in Pod "pod-with-poststart-http-hook_container-lifecycle-hook-2255(5415ae4f-bba0-4a6d-be5f-662dfa141674)" failed - error: Get "http://100.96.4.217:8080//echo?msg=poststart": dial tcp 100.96.4.217:8080: i/o timeout, message: ""
Oct 19 19:46:11.626: INFO: At 2021-10-19 19:41:41 +0000 UTC - event for pod-with-poststart-http-hook: {kubelet ip-172-20-52-34.eu-west-1.compute.internal} Killing: FailedPostStartHook
Oct 19 19:46:11.626: INFO: At 2021-10-19 19:42:13 +0000 UTC - event for pod-with-poststart-http-hook: {kubelet ip-172-20-52-34.eu-west-1.compute.internal} BackOff: Back-off restarting failed container
Oct 19 19:46:11.732: INFO: POD                           NODE                                        PHASE    GRACE  CONDITIONS
Oct 19 19:46:11.732: INFO: pod-handle-http-request       ip-172-20-55-71.eu-west-1.compute.internal  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-19 19:41:06 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-10-19 19:41:08 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-10-19 19:41:08 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-19 19:41:06 +0000 UTC  }]
Oct 19 19:46:11.732: INFO: pod-with-poststart-http-hook  ip-172-20-52-34.eu-west-1.compute.internal  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-19 19:41:11 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-19 19:41:11 +0000 UTC ContainersNotReady containers with unready status: [pod-with-poststart-http-hook]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-19 19:41:11 +0000 UTC ContainersNotReady containers with unready status: [pod-with-poststart-http-hook]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-19 19:41:11 +0000 UTC  }]
Oct 19 19:46:11.732: INFO: 
Oct 19 19:46:11.839: INFO: 
Logging node info for node ip-172-20-35-5.eu-west-1.compute.internal
... skipping 188 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43
    should execute poststart http hook properly [NodeConformance] [Conformance] [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

    Oct 19 19:46:11.519: Unexpected error:
        <*errors.errorString | 0xc0001c4250>: {
            s: "timed out waiting for the condition",
        }
        timed out waiting for the condition
    occurred

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:103
------------------------------
{"msg":"FAILED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":120,"failed":4,"failures":["[sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PVC and a pre-bound PV: test write access","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:46:16.217: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 27 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating pod pod-subpath-test-configmap-k2kz
STEP: Creating a pod to test atomic-volume-subpath
Oct 19 19:45:54.444: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-k2kz" in namespace "subpath-238" to be "Succeeded or Failed"
Oct 19 19:45:54.550: INFO: Pod "pod-subpath-test-configmap-k2kz": Phase="Pending", Reason="", readiness=false. Elapsed: 105.979678ms
Oct 19 19:45:56.657: INFO: Pod "pod-subpath-test-configmap-k2kz": Phase="Running", Reason="", readiness=true. Elapsed: 2.21239439s
Oct 19 19:45:58.764: INFO: Pod "pod-subpath-test-configmap-k2kz": Phase="Running", Reason="", readiness=true. Elapsed: 4.319501926s
Oct 19 19:46:00.871: INFO: Pod "pod-subpath-test-configmap-k2kz": Phase="Running", Reason="", readiness=true. Elapsed: 6.426839508s
Oct 19 19:46:02.978: INFO: Pod "pod-subpath-test-configmap-k2kz": Phase="Running", Reason="", readiness=true. Elapsed: 8.533338439s
Oct 19 19:46:05.085: INFO: Pod "pod-subpath-test-configmap-k2kz": Phase="Running", Reason="", readiness=true. Elapsed: 10.641041194s
Oct 19 19:46:07.193: INFO: Pod "pod-subpath-test-configmap-k2kz": Phase="Running", Reason="", readiness=true. Elapsed: 12.748746907s
Oct 19 19:46:09.300: INFO: Pod "pod-subpath-test-configmap-k2kz": Phase="Running", Reason="", readiness=true. Elapsed: 14.855523118s
Oct 19 19:46:11.407: INFO: Pod "pod-subpath-test-configmap-k2kz": Phase="Running", Reason="", readiness=true. Elapsed: 16.9625781s
Oct 19 19:46:13.514: INFO: Pod "pod-subpath-test-configmap-k2kz": Phase="Running", Reason="", readiness=true. Elapsed: 19.069524273s
Oct 19 19:46:15.620: INFO: Pod "pod-subpath-test-configmap-k2kz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 21.175763815s
STEP: Saw pod success
Oct 19 19:46:15.620: INFO: Pod "pod-subpath-test-configmap-k2kz" satisfied condition "Succeeded or Failed"
Oct 19 19:46:15.726: INFO: Trying to get logs from node ip-172-20-55-71.eu-west-1.compute.internal pod pod-subpath-test-configmap-k2kz container test-container-subpath-configmap-k2kz: <nil>
STEP: delete the pod
Oct 19 19:46:15.945: INFO: Waiting for pod pod-subpath-test-configmap-k2kz to disappear
Oct 19 19:46:16.051: INFO: Pod pod-subpath-test-configmap-k2kz no longer exists
STEP: Deleting pod pod-subpath-test-configmap-k2kz
Oct 19 19:46:16.051: INFO: Deleting pod "pod-subpath-test-configmap-k2kz" in namespace "subpath-238"
... skipping 8 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":-1,"completed":20,"skipped":132,"failed":3,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:46:16.385: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 22 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Oct 19 19:46:14.188: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7ee3a1ed-9b2f-4e11-9e8e-982784bc6173" in namespace "projected-1933" to be "Succeeded or Failed"
Oct 19 19:46:14.294: INFO: Pod "downwardapi-volume-7ee3a1ed-9b2f-4e11-9e8e-982784bc6173": Phase="Pending", Reason="", readiness=false. Elapsed: 106.168026ms
Oct 19 19:46:16.403: INFO: Pod "downwardapi-volume-7ee3a1ed-9b2f-4e11-9e8e-982784bc6173": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.21522626s
STEP: Saw pod success
Oct 19 19:46:16.403: INFO: Pod "downwardapi-volume-7ee3a1ed-9b2f-4e11-9e8e-982784bc6173" satisfied condition "Succeeded or Failed"
Oct 19 19:46:16.509: INFO: Trying to get logs from node ip-172-20-55-71.eu-west-1.compute.internal pod downwardapi-volume-7ee3a1ed-9b2f-4e11-9e8e-982784bc6173 container client-container: <nil>
STEP: delete the pod
Oct 19 19:46:16.727: INFO: Waiting for pod downwardapi-volume-7ee3a1ed-9b2f-4e11-9e8e-982784bc6173 to disappear
Oct 19 19:46:16.832: INFO: Pod downwardapi-volume-7ee3a1ed-9b2f-4e11-9e8e-982784bc6173 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 19 19:46:16.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1933" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":41,"skipped":284,"failed":3,"failures":["[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PVC and non-pre-bound PV: test write access","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]"]}

SSS
------------------------------
[BeforeEach] [sig-api-machinery] server version
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 19 19:46:17.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "server-version-2335" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":-1,"completed":42,"skipped":287,"failed":3,"failures":["[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PVC and non-pre-bound PV: test write access","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]"]}

SSSSSS
------------------------------
[BeforeEach] [sig-node] PodTemplates
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 15 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 19 19:46:19.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "podtemplate-3420" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":-1,"completed":43,"skipped":293,"failed":3,"failures":["[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PVC and non-pre-bound PV: test write access","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:46:19.387: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 43 lines ...
Oct 19 19:46:16.241: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:69
STEP: Creating a pod to test pod.Spec.SecurityContext.SupplementalGroups
Oct 19 19:46:16.879: INFO: Waiting up to 5m0s for pod "security-context-f95517c8-89c3-4cfb-85ea-79d7ef5e9be9" in namespace "security-context-4644" to be "Succeeded or Failed"
Oct 19 19:46:16.985: INFO: Pod "security-context-f95517c8-89c3-4cfb-85ea-79d7ef5e9be9": Phase="Pending", Reason="", readiness=false. Elapsed: 105.864217ms
Oct 19 19:46:19.092: INFO: Pod "security-context-f95517c8-89c3-4cfb-85ea-79d7ef5e9be9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.213489399s
STEP: Saw pod success
Oct 19 19:46:19.093: INFO: Pod "security-context-f95517c8-89c3-4cfb-85ea-79d7ef5e9be9" satisfied condition "Succeeded or Failed"
Oct 19 19:46:19.200: INFO: Trying to get logs from node ip-172-20-55-71.eu-west-1.compute.internal pod security-context-f95517c8-89c3-4cfb-85ea-79d7ef5e9be9 container test-container: <nil>
STEP: delete the pod
Oct 19 19:46:19.419: INFO: Waiting for pod security-context-f95517c8-89c3-4cfb-85ea-79d7ef5e9be9 to disappear
Oct 19 19:46:19.525: INFO: Pod security-context-f95517c8-89c3-4cfb-85ea-79d7ef5e9be9 no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 19 19:46:19.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-4644" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]","total":-1,"completed":25,"skipped":123,"failed":4,"failures":["[sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PVC and a pre-bound PV: test write access","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:46:19.752: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 19 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-map-15e7cf54-a490-4b3b-bf75-30bc8a087aad
STEP: Creating a pod to test consume secrets
Oct 19 19:46:17.149: INFO: Waiting up to 5m0s for pod "pod-secrets-16df5362-857c-44c3-b1e8-1fbcc86a8a63" in namespace "secrets-3597" to be "Succeeded or Failed"
Oct 19 19:46:17.255: INFO: Pod "pod-secrets-16df5362-857c-44c3-b1e8-1fbcc86a8a63": Phase="Pending", Reason="", readiness=false. Elapsed: 106.005906ms
Oct 19 19:46:19.362: INFO: Pod "pod-secrets-16df5362-857c-44c3-b1e8-1fbcc86a8a63": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.212979368s
STEP: Saw pod success
Oct 19 19:46:19.362: INFO: Pod "pod-secrets-16df5362-857c-44c3-b1e8-1fbcc86a8a63" satisfied condition "Succeeded or Failed"
Oct 19 19:46:19.468: INFO: Trying to get logs from node ip-172-20-55-71.eu-west-1.compute.internal pod pod-secrets-16df5362-857c-44c3-b1e8-1fbcc86a8a63 container secret-volume-test: <nil>
STEP: delete the pod
Oct 19 19:46:19.686: INFO: Waiting for pod pod-secrets-16df5362-857c-44c3-b1e8-1fbcc86a8a63 to disappear
Oct 19 19:46:19.792: INFO: Pod pod-secrets-16df5362-857c-44c3-b1e8-1fbcc86a8a63 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 19 19:46:19.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3597" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":134,"failed":3,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]"]}

SSS
------------------------------
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 15 lines ...
• [SLOW TEST:21.386 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be ready immediately after startupProbe succeeds
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:400
------------------------------
{"msg":"PASSED [sig-node] Probing container should be ready immediately after startupProbe succeeds","total":-1,"completed":24,"skipped":210,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]"]}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
... skipping 54 lines ...
Oct 19 19:45:29.968: INFO: PersistentVolumeClaim csi-hostpathtsfqc found but phase is Pending instead of Bound.
Oct 19 19:45:32.075: INFO: PersistentVolumeClaim csi-hostpathtsfqc found but phase is Pending instead of Bound.
Oct 19 19:45:34.182: INFO: PersistentVolumeClaim csi-hostpathtsfqc found but phase is Pending instead of Bound.
Oct 19 19:45:36.300: INFO: PersistentVolumeClaim csi-hostpathtsfqc found and phase=Bound (6.438207993s)
STEP: Creating pod pod-subpath-test-dynamicpv-dbdg
STEP: Creating a pod to test subpath
Oct 19 19:45:36.640: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-dbdg" in namespace "provisioning-4789" to be "Succeeded or Failed"
Oct 19 19:45:36.747: INFO: Pod "pod-subpath-test-dynamicpv-dbdg": Phase="Pending", Reason="", readiness=false. Elapsed: 107.207968ms
Oct 19 19:45:38.854: INFO: Pod "pod-subpath-test-dynamicpv-dbdg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.214199992s
Oct 19 19:45:40.973: INFO: Pod "pod-subpath-test-dynamicpv-dbdg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.332835866s
Oct 19 19:45:43.081: INFO: Pod "pod-subpath-test-dynamicpv-dbdg": Phase="Pending", Reason="", readiness=false. Elapsed: 6.440866777s
Oct 19 19:45:45.188: INFO: Pod "pod-subpath-test-dynamicpv-dbdg": Phase="Pending", Reason="", readiness=false. Elapsed: 8.548379648s
Oct 19 19:45:47.296: INFO: Pod "pod-subpath-test-dynamicpv-dbdg": Phase="Pending", Reason="", readiness=false. Elapsed: 10.656490332s
Oct 19 19:45:49.404: INFO: Pod "pod-subpath-test-dynamicpv-dbdg": Phase="Pending", Reason="", readiness=false. Elapsed: 12.764477448s
Oct 19 19:45:51.514: INFO: Pod "pod-subpath-test-dynamicpv-dbdg": Phase="Pending", Reason="", readiness=false. Elapsed: 14.874375802s
Oct 19 19:45:53.625: INFO: Pod "pod-subpath-test-dynamicpv-dbdg": Phase="Pending", Reason="", readiness=false. Elapsed: 16.985528466s
Oct 19 19:45:55.733: INFO: Pod "pod-subpath-test-dynamicpv-dbdg": Phase="Pending", Reason="", readiness=false. Elapsed: 19.093483011s
Oct 19 19:45:57.841: INFO: Pod "pod-subpath-test-dynamicpv-dbdg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 21.201076895s
STEP: Saw pod success
Oct 19 19:45:57.841: INFO: Pod "pod-subpath-test-dynamicpv-dbdg" satisfied condition "Succeeded or Failed"
Oct 19 19:45:57.948: INFO: Trying to get logs from node ip-172-20-43-129.eu-west-1.compute.internal pod pod-subpath-test-dynamicpv-dbdg container test-container-subpath-dynamicpv-dbdg: <nil>
STEP: delete the pod
Oct 19 19:45:58.171: INFO: Waiting for pod pod-subpath-test-dynamicpv-dbdg to disappear
Oct 19 19:45:58.278: INFO: Pod pod-subpath-test-dynamicpv-dbdg no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-dbdg
Oct 19 19:45:58.278: INFO: Deleting pod "pod-subpath-test-dynamicpv-dbdg" in namespace "provisioning-4789"
... skipping 54 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:384
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high  [Conformance]","total":-1,"completed":22,"skipped":174,"failed":2,"failures":["[sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PV and a pre-bound PVC: test write access"]}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 19 19:45:44.953: INFO: >>> kubeConfig: /root/.kube/config
... skipping 59 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:449
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":23,"skipped":174,"failed":2,"failures":["[sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PV and a pre-bound PVC: test write access"]}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:46:21.094: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 122 lines ...
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Oct 19 19:46:11.318: INFO: start=2021-10-19 19:46:06.204723634 +0000 UTC m=+1393.401195117, now=2021-10-19 19:46:11.318345624 +0000 UTC m=+1398.514817136, kubelet pod: {"metadata":{"name":"pod-submit-remove-30233c9d-d3ae-41c6-badc-af024a01b761","namespace":"pods-5577","uid":"782917aa-b167-4fbb-b320-c7e80be2132d","resourceVersion":"41147","creationTimestamp":"2021-10-19T19:46:03Z","deletionTimestamp":"2021-10-19T19:46:36Z","deletionGracePeriodSeconds":30,"labels":{"name":"foo","time":"451786627"},"annotations":{"kubernetes.io/config.seen":"2021-10-19T19:46:03.637104198Z","kubernetes.io/config.source":"api"},"managedFields":[{"manager":"e2e.test","operation":"Update","apiVersion":"v1","time":"2021-10-19T19:46:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},"spec":{"volumes":[{"name":"kube-api-access-bg68h","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"agnhost-container","image":"k8s.gcr.io/e2e-test-images/agnhost:2.32","args":["pause"],"resources":{},"volumeMounts":[{"name":"kube-api-access-bg68h","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":0,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"ip-172-20-52-34.eu-west-1.compute.internal","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-10-19T19:46:03Z"},{"type":"Ready","status":"False","lastProbeTime":null,"lastTransitionTime":"2021-10-19T19:46:07Z","reason":"ContainersNotReady","message":"containers with unready status: [agnhost-container]"},{"type":"ContainersReady","status":"False","lastProbeTime":null,"lastTransitionTime":"2021-10-19T19:46:07Z","reason":"ContainersNotReady","message":"containers with unready status: [agnhost-container]"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-10-19T19:46:03Z"}],"hostIP":"172.20.52.34","podIP":"100.96.2.115","podIPs":[{"ip":"100.96.2.115"}],"startTime":"2021-10-19T19:46:03Z","containerStatuses":[{"name":"agnhost-container","state":{"terminated":{"exitCode":2,"reason":"Error","startedAt":"2021-10-19T19:46:04Z","finishedAt":"2021-10-19T19:46:06Z","containerID":"containerd://61f41b4709b2e71443f8c8062ecaeca54ffcb335d6e715e94bd6bb4d4d7c7e41"}},"lastState":{},"ready":false,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/agnhost:2.32","imageID":"k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1","containerID":"containerd://61f41b4709b2e71443f8c8062ecaeca54ffcb335d6e715e94bd6bb4d4d7c7e41","started":false}],"qosClass":"BestEffort"}}
Oct 19 19:46:16.321: INFO: start=2021-10-19 19:46:06.204723634 +0000 UTC m=+1393.401195117, now=2021-10-19 19:46:16.321578067 +0000 UTC m=+1403.518049560, kubelet pod: {"metadata":{"name":"pod-submit-remove-30233c9d-d3ae-41c6-badc-af024a01b761","namespace":"pods-5577","uid":"782917aa-b167-4fbb-b320-c7e80be2132d","resourceVersion":"41147","creationTimestamp":"2021-10-19T19:46:03Z","deletionTimestamp":"2021-10-19T19:46:36Z","deletionGracePeriodSeconds":30,"labels":{"name":"foo","time":"451786627"},"annotations":{"kubernetes.io/config.seen":"2021-10-19T19:46:03.637104198Z","kubernetes.io/config.source":"api"},"managedFields":[{"manager":"e2e.test","operation":"Update","apiVersion":"v1","time":"2021-10-19T19:46:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},"spec":{"volumes":[{"name":"kube-api-access-bg68h","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"agnhost-container","image":"k8s.gcr.io/e2e-test-images/agnhost:2.32","args":["pause"],"resources":{},"volumeMounts":[{"name":"kube-api-access-bg68h","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":0,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"ip-172-20-52-34.eu-west-1.compute.internal","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-10-19T19:46:03Z"},{"type":"Ready","status":"False","lastProbeTime":null,"lastTransitionTime":"2021-10-19T19:46:07Z","reason":"ContainersNotReady","message":"containers with unready status: [agnhost-container]"},{"type":"ContainersReady","status":"False","lastProbeTime":null,"lastTransitionTime":"2021-10-19T19:46:07Z","reason":"ContainersNotReady","message":"containers with unready status: [agnhost-container]"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-10-19T19:46:03Z"}],"hostIP":"172.20.52.34","podIP":"100.96.2.115","podIPs":[{"ip":"100.96.2.115"}],"startTime":"2021-10-19T19:46:03Z","containerStatuses":[{"name":"agnhost-container","state":{"terminated":{"exitCode":2,"reason":"Error","startedAt":"2021-10-19T19:46:04Z","finishedAt":"2021-10-19T19:46:06Z","containerID":"containerd://61f41b4709b2e71443f8c8062ecaeca54ffcb335d6e715e94bd6bb4d4d7c7e41"}},"lastState":{},"ready":false,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/agnhost:2.32","imageID":"k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1","containerID":"containerd://61f41b4709b2e71443f8c8062ecaeca54ffcb335d6e715e94bd6bb4d4d7c7e41","started":false}],"qosClass":"BestEffort"}}
Oct 19 19:46:21.318: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [sig-node] Pods Extended
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 19 19:46:21.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5577" for this suite.

... skipping 3 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  Delete Grace Period
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:51
    should be submitted and removed
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:62
------------------------------
{"msg":"PASSED [sig-node] Pods Extended Delete Grace Period should be submitted and removed","total":-1,"completed":19,"skipped":200,"failed":1,"failures":["[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]"]}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:46:21.694: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 14 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SSSSSSSSS
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":29,"skipped":202,"failed":1,"failures":["[sig-network] Services should implement service.kubernetes.io/headless"]}
[BeforeEach] [sig-api-machinery] ServerSideApply
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 19 19:46:20.953: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename apply
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 8 lines ...
STEP: Destroying namespace "apply-4178" for this suite.
[AfterEach] [sig-api-machinery] ServerSideApply
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/apply.go:56

•
------------------------------
{"msg":"PASSED [sig-api-machinery] ServerSideApply should give up ownership of a field if forced applied by a controller","total":-1,"completed":30,"skipped":202,"failed":1,"failures":["[sig-network] Services should implement service.kubernetes.io/headless"]}

S
------------------------------
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 127 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI online volume expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:672
    should expand volume without restarting pod if attach=off, nodeExpansion=on
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:687
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI online volume expansion should expand volume without restarting pod if attach=off, nodeExpansion=on","total":-1,"completed":38,"skipped":251,"failed":3,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 19 19:46:25.964: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward api env vars
Oct 19 19:46:26.609: INFO: Waiting up to 5m0s for pod "downward-api-7c57861b-cacf-4780-ba13-e8f4bd029970" in namespace "downward-api-4395" to be "Succeeded or Failed"
Oct 19 19:46:26.723: INFO: Pod "downward-api-7c57861b-cacf-4780-ba13-e8f4bd029970": Phase="Pending", Reason="", readiness=false. Elapsed: 114.653217ms
Oct 19 19:46:28.829: INFO: Pod "downward-api-7c57861b-cacf-4780-ba13-e8f4bd029970": Phase="Running", Reason="", readiness=true. Elapsed: 2.220223591s
Oct 19 19:46:30.935: INFO: Pod "downward-api-7c57861b-cacf-4780-ba13-e8f4bd029970": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.326501895s
STEP: Saw pod success
Oct 19 19:46:30.935: INFO: Pod "downward-api-7c57861b-cacf-4780-ba13-e8f4bd029970" satisfied condition "Succeeded or Failed"
Oct 19 19:46:31.041: INFO: Trying to get logs from node ip-172-20-55-71.eu-west-1.compute.internal pod downward-api-7c57861b-cacf-4780-ba13-e8f4bd029970 container dapi-container: <nil>
STEP: delete the pod
Oct 19 19:46:31.257: INFO: Waiting for pod downward-api-7c57861b-cacf-4780-ba13-e8f4bd029970 to disappear
Oct 19 19:46:31.361: INFO: Pod downward-api-7c57861b-cacf-4780-ba13-e8f4bd029970 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.610 seconds]
[sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":-1,"completed":39,"skipped":251,"failed":3,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
... skipping 79 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with different fsgroup applied to the volume contents
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:208
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with different fsgroup applied to the volume contents","total":-1,"completed":18,"skipped":133,"failed":5,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a ClusterIP service","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Services should be able to up and down services"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:46:33.480: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 88 lines ...
Oct 19 19:46:15.835: INFO: PersistentVolumeClaim pvc-n8wdt found but phase is Pending instead of Bound.
Oct 19 19:46:17.941: INFO: PersistentVolumeClaim pvc-n8wdt found and phase=Bound (4.317319874s)
Oct 19 19:46:17.941: INFO: Waiting up to 3m0s for PersistentVolume local-r9qs5 to have phase Bound
Oct 19 19:46:18.056: INFO: PersistentVolume local-r9qs5 found and phase=Bound (115.742377ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-9nfc
STEP: Creating a pod to test subpath
Oct 19 19:46:18.378: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-9nfc" in namespace "provisioning-9868" to be "Succeeded or Failed"
Oct 19 19:46:18.484: INFO: Pod "pod-subpath-test-preprovisionedpv-9nfc": Phase="Pending", Reason="", readiness=false. Elapsed: 105.624676ms
Oct 19 19:46:20.591: INFO: Pod "pod-subpath-test-preprovisionedpv-9nfc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.213286887s
Oct 19 19:46:22.698: INFO: Pod "pod-subpath-test-preprovisionedpv-9nfc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.319740724s
Oct 19 19:46:24.807: INFO: Pod "pod-subpath-test-preprovisionedpv-9nfc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.429409567s
STEP: Saw pod success
Oct 19 19:46:24.807: INFO: Pod "pod-subpath-test-preprovisionedpv-9nfc" satisfied condition "Succeeded or Failed"
Oct 19 19:46:24.913: INFO: Trying to get logs from node ip-172-20-35-5.eu-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-9nfc container test-container-subpath-preprovisionedpv-9nfc: <nil>
STEP: delete the pod
Oct 19 19:46:25.133: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-9nfc to disappear
Oct 19 19:46:25.239: INFO: Pod pod-subpath-test-preprovisionedpv-9nfc no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-9nfc
Oct 19 19:46:25.239: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-9nfc" in namespace "provisioning-9868"
STEP: Creating pod pod-subpath-test-preprovisionedpv-9nfc
STEP: Creating a pod to test subpath
Oct 19 19:46:25.453: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-9nfc" in namespace "provisioning-9868" to be "Succeeded or Failed"
Oct 19 19:46:25.559: INFO: Pod "pod-subpath-test-preprovisionedpv-9nfc": Phase="Pending", Reason="", readiness=false. Elapsed: 105.667118ms
Oct 19 19:46:27.666: INFO: Pod "pod-subpath-test-preprovisionedpv-9nfc": Phase="Running", Reason="", readiness=true. Elapsed: 2.212632931s
Oct 19 19:46:29.816: INFO: Pod "pod-subpath-test-preprovisionedpv-9nfc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.363076094s
STEP: Saw pod success
Oct 19 19:46:29.816: INFO: Pod "pod-subpath-test-preprovisionedpv-9nfc" satisfied condition "Succeeded or Failed"
Oct 19 19:46:29.922: INFO: Trying to get logs from node ip-172-20-35-5.eu-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-9nfc container test-container-subpath-preprovisionedpv-9nfc: <nil>
STEP: delete the pod
Oct 19 19:46:30.151: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-9nfc to disappear
Oct 19 19:46:30.257: INFO: Pod pod-subpath-test-preprovisionedpv-9nfc no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-9nfc
Oct 19 19:46:30.257: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-9nfc" in namespace "provisioning-9868"
... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:399
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":30,"skipped":187,"failed":2,"failures":["[sig-network] Services should implement service.kubernetes.io/service-proxy-name","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:46:34.036: INFO: Only supported for providers [azure] (not aws)
... skipping 211 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 19 19:46:34.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-1540" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":-1,"completed":40,"skipped":253,"failed":3,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:46:34.907: INFO: Driver "csi-hostpath" does not support FsGroup - skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 27 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 19 19:46:34.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "health-4807" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] health handlers should contain necessary checks","total":-1,"completed":19,"skipped":144,"failed":5,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a ClusterIP service","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Services should be able to up and down services"]}

SSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes GCEPD
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 14 lines ...
Oct 19 19:46:35.662: INFO: pv is nil


S [SKIPPING] in Spec Setup (BeforeEach) [0.743 seconds]
[sig-storage] PersistentVolumes GCEPD
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should test that deleting the PV before the pod does not cause pod deletion to fail on PD detach [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:142

  Only supported for providers [gce gke] (not aws)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:85
------------------------------
... skipping 109 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48
    listing custom resource definition objects works  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":-1,"completed":20,"skipped":147,"failed":5,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a ClusterIP service","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Services should be able to up and down services"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:46:40.085: INFO: Only supported for providers [vsphere] (not aws)
... skipping 150 lines ...
STEP: Listing all of the created validation webhooks
Oct 19 19:45:48.946: INFO: Waiting for webhook configuration to be ready...
Oct 19 19:45:59.276: INFO: Waiting for webhook configuration to be ready...
Oct 19 19:46:09.573: INFO: Waiting for webhook configuration to be ready...
Oct 19 19:46:19.874: INFO: Waiting for webhook configuration to be ready...
Oct 19 19:46:30.099: INFO: Waiting for webhook configuration to be ready...
Oct 19 19:46:30.100: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.errorString | 0xc00023e240>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "webhook-3354".
STEP: Found 8 events.
Oct 19 19:46:30.207: INFO: At 2021-10-19 19:45:27 +0000 UTC - event for sample-webhook-deployment: {deployment-controller } ScalingReplicaSet: Scaled up replica set sample-webhook-deployment-78988fc6cd to 1
Oct 19 19:46:30.207: INFO: At 2021-10-19 19:45:27 +0000 UTC - event for sample-webhook-deployment-78988fc6cd: {replicaset-controller } SuccessfulCreate: Created pod: sample-webhook-deployment-78988fc6cd-5zpf5
Oct 19 19:46:30.207: INFO: At 2021-10-19 19:45:27 +0000 UTC - event for sample-webhook-deployment-78988fc6cd-5zpf5: {default-scheduler } Scheduled: Successfully assigned webhook-3354/sample-webhook-deployment-78988fc6cd-5zpf5 to ip-172-20-43-129.eu-west-1.compute.internal
Oct 19 19:46:30.207: INFO: At 2021-10-19 19:45:28 +0000 UTC - event for sample-webhook-deployment-78988fc6cd-5zpf5: {kubelet ip-172-20-43-129.eu-west-1.compute.internal} FailedMount: MountVolume.SetUp failed for volume "kube-api-access-fclgn" : failed to sync configmap cache: timed out waiting for the condition
Oct 19 19:46:30.207: INFO: At 2021-10-19 19:45:28 +0000 UTC - event for sample-webhook-deployment-78988fc6cd-5zpf5: {kubelet ip-172-20-43-129.eu-west-1.compute.internal} FailedMount: MountVolume.SetUp failed for volume "webhook-certs" : failed to sync secret cache: timed out waiting for the condition
Oct 19 19:46:30.207: INFO: At 2021-10-19 19:45:30 +0000 UTC - event for sample-webhook-deployment-78988fc6cd-5zpf5: {kubelet ip-172-20-43-129.eu-west-1.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
Oct 19 19:46:30.207: INFO: At 2021-10-19 19:45:30 +0000 UTC - event for sample-webhook-deployment-78988fc6cd-5zpf5: {kubelet ip-172-20-43-129.eu-west-1.compute.internal} Created: Created container sample-webhook
Oct 19 19:46:30.207: INFO: At 2021-10-19 19:45:30 +0000 UTC - event for sample-webhook-deployment-78988fc6cd-5zpf5: {kubelet ip-172-20-43-129.eu-west-1.compute.internal} Started: Started container sample-webhook
Oct 19 19:46:30.312: INFO: POD                                         NODE                                         PHASE    GRACE  CONDITIONS
Oct 19 19:46:30.312: INFO: sample-webhook-deployment-78988fc6cd-5zpf5  ip-172-20-43-129.eu-west-1.compute.internal  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-19 19:45:27 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-10-19 19:45:30 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-10-19 19:45:30 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-19 19:45:27 +0000 UTC  }]
Oct 19 19:46:30.313: INFO: 
... skipping 380 lines ...
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing validating webhooks should work [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Oct 19 19:46:30.100: waiting for webhook configuration to be ready
  Unexpected error:
      <*errors.errorString | 0xc00023e240>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:606
------------------------------
SS
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":-1,"completed":38,"skipped":268,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]"]}

SSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:46:40.299: INFO: Only supported for providers [openstack] (not aws)
... skipping 25 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50
[It] files with FSGroup ownership should support (root,0644,tmpfs)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:67
STEP: Creating a pod to test emptydir 0644 on tmpfs
Oct 19 19:46:41.041: INFO: Waiting up to 5m0s for pod "pod-6f30935c-b37b-4c57-9412-34363d69f778" in namespace "emptydir-7322" to be "Succeeded or Failed"
Oct 19 19:46:41.147: INFO: Pod "pod-6f30935c-b37b-4c57-9412-34363d69f778": Phase="Pending", Reason="", readiness=false. Elapsed: 106.636186ms
Oct 19 19:46:43.255: INFO: Pod "pod-6f30935c-b37b-4c57-9412-34363d69f778": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.214168393s
STEP: Saw pod success
Oct 19 19:46:43.255: INFO: Pod "pod-6f30935c-b37b-4c57-9412-34363d69f778" satisfied condition "Succeeded or Failed"
Oct 19 19:46:43.362: INFO: Trying to get logs from node ip-172-20-55-71.eu-west-1.compute.internal pod pod-6f30935c-b37b-4c57-9412-34363d69f778 container test-container: <nil>
STEP: delete the pod
Oct 19 19:46:43.589: INFO: Waiting for pod pod-6f30935c-b37b-4c57-9412-34363d69f778 to disappear
Oct 19 19:46:43.694: INFO: Pod pod-6f30935c-b37b-4c57-9412-34363d69f778 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 19 19:46:43.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7322" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] files with FSGroup ownership should support (root,0644,tmpfs)","total":-1,"completed":21,"skipped":182,"failed":5,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a ClusterIP service","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Services should be able to up and down services"]}
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:46:43.924: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 2 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: gluster]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for node OS distro [gci ubuntu custom] (not debian)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:263
------------------------------
... skipping 37 lines ...
• [SLOW TEST:61.983 seconds]
[sig-api-machinery] Watchers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":-1,"completed":45,"skipped":323,"failed":3,"failures":["[sig-network] Conntrack should drop INVALID conntrack entries","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:46:49.625: INFO: Only supported for providers [openstack] (not aws)
... skipping 90 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (block volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":22,"skipped":137,"failed":3,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:46:54.310: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 262 lines ...
• [SLOW TEST:19.707 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":-1,"completed":41,"skipped":274,"failed":3,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:46:55.547: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 17 lines ...
[BeforeEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 19 19:46:55.283: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap that has name configmap-test-emptyKey-007a11f4-2ffb-4b86-a7c8-a2f2c7bf67e1
[AfterEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 19 19:46:55.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7275" for this suite.
... skipping 79 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: block] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":39,"skipped":269,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:46:57.401: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 187 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  storage capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:900
    unlimited
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:958
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume storage capacity unlimited","total":-1,"completed":26,"skipped":124,"failed":4,"failures":["[sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PVC and a pre-bound PV: test write access","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:47:00.536: INFO: Driver local doesn't support ext3 -- skipping
... skipping 14 lines ...
      Driver local doesn't support ext3 -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:121
------------------------------
SSSSS
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should create a PodDisruptionBudget [Conformance]","total":-1,"completed":31,"skipped":203,"failed":1,"failures":["[sig-network] Services should implement service.kubernetes.io/headless"]}
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 19 19:46:24.437: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename csi-mock-volumes
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 95 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSIStorageCapacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1134
    CSIStorageCapacity used, no capacity
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1177
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","total":-1,"completed":32,"skipped":203,"failed":1,"failures":["[sig-network] Services should implement service.kubernetes.io/headless"]}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:47:02.893: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 52 lines ...
Oct 19 19:46:21.899: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass volume-59416sr97
STEP: creating a claim
Oct 19 19:46:22.007: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod exec-volume-test-dynamicpv-d46f
STEP: Creating a pod to test exec-volume-test
Oct 19 19:46:22.329: INFO: Waiting up to 5m0s for pod "exec-volume-test-dynamicpv-d46f" in namespace "volume-5941" to be "Succeeded or Failed"
Oct 19 19:46:22.436: INFO: Pod "exec-volume-test-dynamicpv-d46f": Phase="Pending", Reason="", readiness=false. Elapsed: 106.356666ms
Oct 19 19:46:24.545: INFO: Pod "exec-volume-test-dynamicpv-d46f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.215263781s
Oct 19 19:46:26.655: INFO: Pod "exec-volume-test-dynamicpv-d46f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.325875314s
Oct 19 19:46:28.765: INFO: Pod "exec-volume-test-dynamicpv-d46f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.435054549s
Oct 19 19:46:30.872: INFO: Pod "exec-volume-test-dynamicpv-d46f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.542224423s
Oct 19 19:46:32.978: INFO: Pod "exec-volume-test-dynamicpv-d46f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.648568346s
Oct 19 19:46:35.085: INFO: Pod "exec-volume-test-dynamicpv-d46f": Phase="Pending", Reason="", readiness=false. Elapsed: 12.755736931s
Oct 19 19:46:37.193: INFO: Pod "exec-volume-test-dynamicpv-d46f": Phase="Pending", Reason="", readiness=false. Elapsed: 14.863628205s
Oct 19 19:46:39.301: INFO: Pod "exec-volume-test-dynamicpv-d46f": Phase="Pending", Reason="", readiness=false. Elapsed: 16.971617836s
Oct 19 19:46:41.410: INFO: Pod "exec-volume-test-dynamicpv-d46f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.080462192s
STEP: Saw pod success
Oct 19 19:46:41.410: INFO: Pod "exec-volume-test-dynamicpv-d46f" satisfied condition "Succeeded or Failed"
Oct 19 19:46:41.516: INFO: Trying to get logs from node ip-172-20-43-129.eu-west-1.compute.internal pod exec-volume-test-dynamicpv-d46f container exec-container-dynamicpv-d46f: <nil>
STEP: delete the pod
Oct 19 19:46:41.736: INFO: Waiting for pod exec-volume-test-dynamicpv-d46f to disappear
Oct 19 19:46:41.847: INFO: Pod exec-volume-test-dynamicpv-d46f no longer exists
STEP: Deleting pod exec-volume-test-dynamicpv-d46f
Oct 19 19:46:41.847: INFO: Deleting pod "exec-volume-test-dynamicpv-d46f" in namespace "volume-5941"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":24,"skipped":211,"failed":2,"failures":["[sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PV and a pre-bound PVC: test write access"]}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:47:03.175: INFO: Only supported for providers [vsphere] (not aws)
... skipping 23 lines ...
Oct 19 19:47:00.583: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0644 on node default medium
Oct 19 19:47:01.230: INFO: Waiting up to 5m0s for pod "pod-354e9352-5784-4f5b-8a26-dbb233279945" in namespace "emptydir-1149" to be "Succeeded or Failed"
Oct 19 19:47:01.336: INFO: Pod "pod-354e9352-5784-4f5b-8a26-dbb233279945": Phase="Pending", Reason="", readiness=false. Elapsed: 106.072077ms
Oct 19 19:47:03.445: INFO: Pod "pod-354e9352-5784-4f5b-8a26-dbb233279945": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.215425229s
STEP: Saw pod success
Oct 19 19:47:03.445: INFO: Pod "pod-354e9352-5784-4f5b-8a26-dbb233279945" satisfied condition "Succeeded or Failed"
Oct 19 19:47:03.551: INFO: Trying to get logs from node ip-172-20-55-71.eu-west-1.compute.internal pod pod-354e9352-5784-4f5b-8a26-dbb233279945 container test-container: <nil>
STEP: delete the pod
Oct 19 19:47:03.773: INFO: Waiting for pod pod-354e9352-5784-4f5b-8a26-dbb233279945 to disappear
Oct 19 19:47:03.879: INFO: Pod pod-354e9352-5784-4f5b-8a26-dbb233279945 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 19 19:47:03.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1149" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":27,"skipped":131,"failed":4,"failures":["[sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PVC and a pre-bound PV: test write access","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]"]}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:47:04.109: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 53 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 19 19:47:04.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "metrics-grabber-3070" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from API server.","total":-1,"completed":33,"skipped":214,"failed":1,"failures":["[sig-network] Services should implement service.kubernetes.io/headless"]}

SSSSS
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":-1,"completed":23,"skipped":167,"failed":3,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]"]}
[BeforeEach] [sig-api-machinery] ServerSideApply
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 19 19:46:56.147: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename apply
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 13 lines ...
• [SLOW TEST:8.499 seconds]
[sig-api-machinery] ServerSideApply
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should work for CRDs
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/apply.go:569
------------------------------
{"msg":"PASSED [sig-api-machinery] ServerSideApply should work for CRDs","total":-1,"completed":24,"skipped":167,"failed":3,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]"]}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:47:04.659: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 321 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 19 19:47:06.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "discovery-727" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":-1,"completed":25,"skipped":216,"failed":2,"failures":["[sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PV and a pre-bound PVC: test write access"]}
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:47:06.696: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 64 lines ...
Oct 19 19:47:00.598: INFO: PersistentVolumeClaim pvc-xgjf8 found but phase is Pending instead of Bound.
Oct 19 19:47:02.706: INFO: PersistentVolumeClaim pvc-xgjf8 found and phase=Bound (14.859261507s)
Oct 19 19:47:02.706: INFO: Waiting up to 3m0s for PersistentVolume local-44295 to have phase Bound
Oct 19 19:47:02.813: INFO: PersistentVolume local-44295 found and phase=Bound (106.546747ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-wnq8
STEP: Creating a pod to test exec-volume-test
Oct 19 19:47:03.139: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-wnq8" in namespace "volume-5094" to be "Succeeded or Failed"
Oct 19 19:47:03.245: INFO: Pod "exec-volume-test-preprovisionedpv-wnq8": Phase="Pending", Reason="", readiness=false. Elapsed: 106.149948ms
Oct 19 19:47:05.353: INFO: Pod "exec-volume-test-preprovisionedpv-wnq8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.213543271s
STEP: Saw pod success
Oct 19 19:47:05.353: INFO: Pod "exec-volume-test-preprovisionedpv-wnq8" satisfied condition "Succeeded or Failed"
Oct 19 19:47:05.459: INFO: Trying to get logs from node ip-172-20-35-5.eu-west-1.compute.internal pod exec-volume-test-preprovisionedpv-wnq8 container exec-container-preprovisionedpv-wnq8: <nil>
STEP: delete the pod
Oct 19 19:47:05.679: INFO: Waiting for pod exec-volume-test-preprovisionedpv-wnq8 to disappear
Oct 19 19:47:05.785: INFO: Pod exec-volume-test-preprovisionedpv-wnq8 no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-wnq8
Oct 19 19:47:05.785: INFO: Deleting pod "exec-volume-test-preprovisionedpv-wnq8" in namespace "volume-5094"
... skipping 17 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":22,"skipped":184,"failed":5,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a ClusterIP service","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Services should be able to up and down services"]}

S
------------------------------
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 18 lines ...
• [SLOW TEST:245.627 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":187,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:47:09.358: INFO: Only supported for providers [azure] (not aws)
... skipping 98 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 19 19:47:09.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-1493" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should not run with an explicit root user ID [LinuxOnly]","total":-1,"completed":23,"skipped":185,"failed":5,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a ClusterIP service","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Services should be able to up and down services"]}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:47:10.149: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 83 lines ...
Oct 19 19:47:06.779: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Oct 19 19:47:07.420: INFO: Waiting up to 5m0s for pod "security-context-c1bf26fe-f9a7-4a3a-83e0-513048591945" in namespace "security-context-1977" to be "Succeeded or Failed"
Oct 19 19:47:07.527: INFO: Pod "security-context-c1bf26fe-f9a7-4a3a-83e0-513048591945": Phase="Pending", Reason="", readiness=false. Elapsed: 106.515777ms
Oct 19 19:47:09.635: INFO: Pod "security-context-c1bf26fe-f9a7-4a3a-83e0-513048591945": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.214960595s
STEP: Saw pod success
Oct 19 19:47:09.635: INFO: Pod "security-context-c1bf26fe-f9a7-4a3a-83e0-513048591945" satisfied condition "Succeeded or Failed"
Oct 19 19:47:09.742: INFO: Trying to get logs from node ip-172-20-55-71.eu-west-1.compute.internal pod security-context-c1bf26fe-f9a7-4a3a-83e0-513048591945 container test-container: <nil>
STEP: delete the pod
Oct 19 19:47:09.960: INFO: Waiting for pod security-context-c1bf26fe-f9a7-4a3a-83e0-513048591945 to disappear
Oct 19 19:47:10.066: INFO: Pod security-context-c1bf26fe-f9a7-4a3a-83e0-513048591945 no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 19 19:47:10.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-1977" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":26,"skipped":227,"failed":2,"failures":["[sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PV and a pre-bound PVC: test write access"]}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:47:10.309: INFO: Only supported for providers [openstack] (not aws)
... skipping 66 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Oct 19 19:47:10.866: INFO: Waiting up to 5m0s for pod "busybox-user-65534-427cdcc8-baec-4289-bb4a-1b413135d18d" in namespace "security-context-test-4992" to be "Succeeded or Failed"
Oct 19 19:47:10.972: INFO: Pod "busybox-user-65534-427cdcc8-baec-4289-bb4a-1b413135d18d": Phase="Pending", Reason="", readiness=false. Elapsed: 106.520248ms
Oct 19 19:47:13.080: INFO: Pod "busybox-user-65534-427cdcc8-baec-4289-bb4a-1b413135d18d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.213935853s
Oct 19 19:47:13.080: INFO: Pod "busybox-user-65534-427cdcc8-baec-4289-bb4a-1b413135d18d" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 19 19:47:13.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-4992" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":193,"failed":5,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a ClusterIP service","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Services should be able to up and down services"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:47:13.320: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 129 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":27,"skipped":235,"failed":2,"failures":["[sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PV and a pre-bound PVC: test write access"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:47:22.486: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 112 lines ...
Oct 19 19:41:53.317: INFO: PersistentVolume nfs-cgvv7 found and phase=Bound (106.019308ms)
Oct 19 19:41:53.423: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-8gntx] to have phase Bound
Oct 19 19:41:53.529: INFO: PersistentVolumeClaim pvc-8gntx found and phase=Bound (105.584958ms)
STEP: Checking pod has write access to PersistentVolumes
Oct 19 19:41:53.634: INFO: Creating nfs test pod
Oct 19 19:41:53.742: INFO: Pod should terminate with exitcode 0 (success)
Oct 19 19:41:53.742: INFO: Waiting up to 5m0s for pod "pvc-tester-wnxkj" in namespace "pv-7993" to be "Succeeded or Failed"
Oct 19 19:41:53.849: INFO: Pod "pvc-tester-wnxkj": Phase="Pending", Reason="", readiness=false. Elapsed: 106.490877ms
Oct 19 19:41:55.955: INFO: Pod "pvc-tester-wnxkj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.212997566s
STEP: Saw pod success
Oct 19 19:41:55.955: INFO: Pod "pvc-tester-wnxkj" satisfied condition "Succeeded or Failed"
Oct 19 19:41:55.955: INFO: Pod pvc-tester-wnxkj succeeded 
Oct 19 19:41:55.955: INFO: Deleting pod "pvc-tester-wnxkj" in namespace "pv-7993"
Oct 19 19:41:56.066: INFO: Wait up to 5m0s for pod "pvc-tester-wnxkj" to be fully deleted
Oct 19 19:41:56.277: INFO: Creating nfs test pod
Oct 19 19:41:56.383: INFO: Pod should terminate with exitcode 0 (success)
Oct 19 19:41:56.383: INFO: Waiting up to 5m0s for pod "pvc-tester-q7k47" in namespace "pv-7993" to be "Succeeded or Failed"
Oct 19 19:41:56.489: INFO: Pod "pvc-tester-q7k47": Phase="Pending", Reason="", readiness=false. Elapsed: 105.900467ms
Oct 19 19:41:58.598: INFO: Pod "pvc-tester-q7k47": Phase="Pending", Reason="", readiness=false. Elapsed: 2.214128734s
Oct 19 19:42:00.706: INFO: Pod "pvc-tester-q7k47": Phase="Pending", Reason="", readiness=false. Elapsed: 4.322127633s
Oct 19 19:42:02.812: INFO: Pod "pvc-tester-q7k47": Phase="Pending", Reason="", readiness=false. Elapsed: 6.428718161s
Oct 19 19:42:04.919: INFO: Pod "pvc-tester-q7k47": Phase="Pending", Reason="", readiness=false. Elapsed: 8.535231932s
Oct 19 19:42:07.026: INFO: Pod "pvc-tester-q7k47": Phase="Pending", Reason="", readiness=false. Elapsed: 10.64252675s
... skipping 133 lines ...
Oct 19 19:46:49.519: INFO: Pod "pvc-tester-q7k47": Phase="Pending", Reason="", readiness=false. Elapsed: 4m53.135531924s
Oct 19 19:46:51.626: INFO: Pod "pvc-tester-q7k47": Phase="Pending", Reason="", readiness=false. Elapsed: 4m55.242310436s
Oct 19 19:46:53.732: INFO: Pod "pvc-tester-q7k47": Phase="Pending", Reason="", readiness=false. Elapsed: 4m57.348801687s
Oct 19 19:46:55.839: INFO: Pod "pvc-tester-q7k47": Phase="Pending", Reason="", readiness=false. Elapsed: 4m59.45545555s
Oct 19 19:46:57.840: INFO: Deleting pod "pvc-tester-q7k47" in namespace "pv-7993"
Oct 19 19:46:57.948: INFO: Wait up to 5m0s for pod "pvc-tester-q7k47" to be fully deleted
Oct 19 19:47:08.162: FAIL: Unexpected error:
    <*errors.errorString | 0xc0028f11d0>: {
        s: "pod \"pvc-tester-q7k47\" did not exit with Success: pod \"pvc-tester-q7k47\" failed to reach Success: Gave up after waiting 5m0s for pod \"pvc-tester-q7k47\" to be \"Succeeded or Failed\"",
    }
    pod "pvc-tester-q7k47" did not exit with Success: pod "pvc-tester-q7k47" failed to reach Success: Gave up after waiting 5m0s for pod "pvc-tester-q7k47" to be "Succeeded or Failed"
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/storage.glob..func22.2.4.2()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:238 +0x371
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0030e5680)
... skipping 30 lines ...
Oct 19 19:47:19.233: INFO: At 2021-10-19 19:41:53 +0000 UTC - event for pvc-tester-wnxkj: {default-scheduler } Scheduled: Successfully assigned pv-7993/pvc-tester-wnxkj to ip-172-20-43-129.eu-west-1.compute.internal
Oct 19 19:47:19.233: INFO: At 2021-10-19 19:41:55 +0000 UTC - event for pvc-tester-wnxkj: {kubelet ip-172-20-43-129.eu-west-1.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29-1" already present on machine
Oct 19 19:47:19.233: INFO: At 2021-10-19 19:41:55 +0000 UTC - event for pvc-tester-wnxkj: {kubelet ip-172-20-43-129.eu-west-1.compute.internal} Created: Created container write-pod
Oct 19 19:47:19.233: INFO: At 2021-10-19 19:41:55 +0000 UTC - event for pvc-tester-wnxkj: {kubelet ip-172-20-43-129.eu-west-1.compute.internal} Started: Started container write-pod
Oct 19 19:47:19.233: INFO: At 2021-10-19 19:41:56 +0000 UTC - event for pvc-tester-q7k47: {default-scheduler } Scheduled: Successfully assigned pv-7993/pvc-tester-q7k47 to ip-172-20-52-34.eu-west-1.compute.internal
Oct 19 19:47:19.233: INFO: At 2021-10-19 19:43:59 +0000 UTC - event for pvc-tester-q7k47: {kubelet ip-172-20-52-34.eu-west-1.compute.internal} FailedMount: Unable to attach or mount volumes: unmounted volumes=[volume1], unattached volumes=[volume1 kube-api-access-lkvsk]: timed out waiting for the condition
Oct 19 19:47:19.233: INFO: At 2021-10-19 19:44:58 +0000 UTC - event for pvc-tester-q7k47: {kubelet ip-172-20-52-34.eu-west-1.compute.internal} FailedMount: MountVolume.SetUp failed for volume "nfs-cgvv7" : mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t nfs 100.96.3.62:/exports /var/lib/kubelet/pods/32dba782-8f36-4e3a-a132-9d06804c3629/volumes/kubernetes.io~nfs/nfs-cgvv7
Output: mount.nfs: Connection timed out

Oct 19 19:47:19.233: INFO: At 2021-10-19 19:47:08 +0000 UTC - event for nfs-server: {kubelet ip-172-20-43-129.eu-west-1.compute.internal} Killing: Stopping container nfs-server
Oct 19 19:47:19.339: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
... skipping 198 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:122
    with multiple PVs and PVCs all in same ns
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:212
      should create 2 PVs and 4 PVCs: test write access [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:233

      Oct 19 19:47:08.162: Unexpected error:
          <*errors.errorString | 0xc0028f11d0>: {
              s: "pod \"pvc-tester-q7k47\" did not exit with Success: pod \"pvc-tester-q7k47\" failed to reach Success: Gave up after waiting 5m0s for pod \"pvc-tester-q7k47\" to be \"Succeeded or Failed\"",
          }
          pod "pvc-tester-q7k47" did not exit with Success: pod "pvc-tester-q7k47" failed to reach Success: Gave up after waiting 5m0s for pod "pvc-tester-q7k47" to be "Succeeded or Failed"
      occurred

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:238
------------------------------
{"msg":"FAILED [sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns should create 2 PVs and 4 PVCs: test write access","total":-1,"completed":16,"skipped":105,"failed":1,"failures":["[sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns should create 2 PVs and 4 PVCs: test write access"]}

SSSS
------------------------------
[BeforeEach] [sig-storage] PVC Protection
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 32 lines ...
• [SLOW TEST:32.324 seconds]
[sig-storage] PVC Protection
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Verify that PVC in active use by a pod is not removed immediately
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:126
------------------------------
{"msg":"PASSED [sig-storage] PVC Protection Verify that PVC in active use by a pod is not removed immediately","total":-1,"completed":42,"skipped":277,"failed":3,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:47:27.921: INFO: Only supported for providers [gce gke] (not aws)
... skipping 74 lines ...
STEP: Creating a kubernetes client
Oct 19 19:47:22.622: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename volume-provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Dynamic Provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:146
[It] should report an error and create no PV
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:825
STEP: creating a StorageClass
STEP: Creating a StorageClass
STEP: creating a claim object with a suffix for gluster dynamic provisioner
Oct 19 19:47:23.368: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Oct 19 19:47:29.583: INFO: deleting claim "volume-provisioning-3667"/"pvc-plzc9"
... skipping 6 lines ...

• [SLOW TEST:7.392 seconds]
[sig-storage] Dynamic Provisioning
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Invalid AWS KMS key
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:824
    should report an error and create no PV
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:825
------------------------------
{"msg":"PASSED [sig-storage] Dynamic Provisioning Invalid AWS KMS key should report an error and create no PV","total":-1,"completed":28,"skipped":254,"failed":2,"failures":["[sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PV and a pre-bound PVC: test write access"]}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:47:30.046: INFO: Only supported for providers [openstack] (not aws)
... skipping 152 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":34,"skipped":219,"failed":1,"failures":["[sig-network] Services should implement service.kubernetes.io/headless"]}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:47:38.470: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 136 lines ...
Oct 19 19:47:30.901: INFO: PersistentVolumeClaim pvc-vvbnv found but phase is Pending instead of Bound.
Oct 19 19:47:33.009: INFO: PersistentVolumeClaim pvc-vvbnv found and phase=Bound (14.865320453s)
Oct 19 19:47:33.009: INFO: Waiting up to 3m0s for PersistentVolume local-qmbvs to have phase Bound
Oct 19 19:47:33.115: INFO: PersistentVolume local-qmbvs found and phase=Bound (106.282627ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-lkm2
STEP: Creating a pod to test subpath
Oct 19 19:47:33.435: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-lkm2" in namespace "provisioning-2429" to be "Succeeded or Failed"
Oct 19 19:47:33.542: INFO: Pod "pod-subpath-test-preprovisionedpv-lkm2": Phase="Pending", Reason="", readiness=false. Elapsed: 106.406197ms
Oct 19 19:47:35.649: INFO: Pod "pod-subpath-test-preprovisionedpv-lkm2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.213963544s
Oct 19 19:47:37.757: INFO: Pod "pod-subpath-test-preprovisionedpv-lkm2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.321834337s
STEP: Saw pod success
Oct 19 19:47:37.757: INFO: Pod "pod-subpath-test-preprovisionedpv-lkm2" satisfied condition "Succeeded or Failed"
Oct 19 19:47:37.864: INFO: Trying to get logs from node ip-172-20-35-5.eu-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-lkm2 container test-container-subpath-preprovisionedpv-lkm2: <nil>
STEP: delete the pod
Oct 19 19:47:38.093: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-lkm2 to disappear
Oct 19 19:47:38.200: INFO: Pod pod-subpath-test-preprovisionedpv-lkm2 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-lkm2
Oct 19 19:47:38.201: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-lkm2" in namespace "provisioning-2429"
... skipping 49 lines ...
STEP: Registering the crd webhook via the AdmissionRegistration API
Oct 19 19:46:49.582: INFO: Waiting for webhook configuration to be ready...
Oct 19 19:46:59.899: INFO: Waiting for webhook configuration to be ready...
Oct 19 19:47:10.197: INFO: Waiting for webhook configuration to be ready...
Oct 19 19:47:20.495: INFO: Waiting for webhook configuration to be ready...
Oct 19 19:47:30.709: INFO: Waiting for webhook configuration to be ready...
Oct 19 19:47:30.709: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.errorString | 0xc0001c4250>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 398 lines ...
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should deny crd creation [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Oct 19 19:47:30.709: waiting for webhook configuration to be ready
  Unexpected error:
      <*errors.errorString | 0xc0001c4250>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:2059
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":-1,"completed":30,"skipped":214,"failed":3,"failures":["[sig-network] Services should implement service.kubernetes.io/service-proxy-name","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:47:41.362: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 132 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should create read-only inline ephemeral volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:149
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read-only inline ephemeral volume","total":-1,"completed":20,"skipped":214,"failed":1,"failures":["[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:47:41.756: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 71 lines ...
• [SLOW TEST:14.985 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD without validation schema [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":-1,"completed":43,"skipped":280,"failed":3,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:47:42.957: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 38 lines ...
• [SLOW TEST:5.775 seconds]
[sig-node] InitContainer [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":-1,"completed":35,"skipped":241,"failed":1,"failures":["[sig-network] Services should implement service.kubernetes.io/headless"]}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:47:44.433: INFO: Only supported for providers [gce gke] (not aws)
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: windows-gcepd]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1301
------------------------------
... skipping 25 lines ...
STEP: Destroying namespace "services-3010" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

•
------------------------------
{"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":-1,"completed":44,"skipped":285,"failed":3,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:47:45.170: INFO: Only supported for providers [gce gke] (not aws)
... skipping 107 lines ...
Oct 19 19:47:41.783: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0666 on tmpfs
Oct 19 19:47:42.420: INFO: Waiting up to 5m0s for pod "pod-54406a76-fd8c-47d6-b47e-c537798e18ab" in namespace "emptydir-6549" to be "Succeeded or Failed"
Oct 19 19:47:42.526: INFO: Pod "pod-54406a76-fd8c-47d6-b47e-c537798e18ab": Phase="Pending", Reason="", readiness=false. Elapsed: 105.847077ms
Oct 19 19:47:44.633: INFO: Pod "pod-54406a76-fd8c-47d6-b47e-c537798e18ab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.213091819s
STEP: Saw pod success
Oct 19 19:47:44.633: INFO: Pod "pod-54406a76-fd8c-47d6-b47e-c537798e18ab" satisfied condition "Succeeded or Failed"
Oct 19 19:47:44.739: INFO: Trying to get logs from node ip-172-20-43-129.eu-west-1.compute.internal pod pod-54406a76-fd8c-47d6-b47e-c537798e18ab container test-container: <nil>
STEP: delete the pod
Oct 19 19:47:44.957: INFO: Waiting for pod pod-54406a76-fd8c-47d6-b47e-c537798e18ab to disappear
Oct 19 19:47:45.062: INFO: Pod pod-54406a76-fd8c-47d6-b47e-c537798e18ab no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 20 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159

      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1301
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":217,"failed":1,"failures":["[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:47:45.300: INFO: Only supported for providers [azure] (not aws)
... skipping 125 lines ...
STEP: Destroying namespace "services-9512" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

•
------------------------------
{"msg":"PASSED [sig-network] Services should check NodePort out-of-range","total":-1,"completed":36,"skipped":245,"failed":1,"failures":["[sig-network] Services should implement service.kubernetes.io/headless"]}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:47:45.813: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 122 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with same fsgroup applied to the volume contents
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:208
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with same fsgroup applied to the volume contents","total":-1,"completed":25,"skipped":212,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
... skipping 36 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should provision a volume and schedule a pod with AllowedTopologies
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:164
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies","total":-1,"completed":46,"skipped":330,"failed":3,"failures":["[sig-network] Conntrack should drop INVALID conntrack entries","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]"]}
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:47:48.644: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 32 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 19 19:47:49.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7425" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Pods should delete a collection of pods [Conformance]","total":-1,"completed":47,"skipped":331,"failed":3,"failures":["[sig-network] Conntrack should drop INVALID conntrack entries","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:47:50.100: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 337 lines ...
  ----    ------     ----  ----               -------
  Normal  Scheduled  34s   default-scheduler  Successfully assigned pod-network-test-6557/netserver-3 to ip-172-20-55-71.eu-west-1.compute.internal
  Normal  Pulled     33s   kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
  Normal  Created    33s   kubelet            Created container webserver
  Normal  Started    33s   kubelet            Started container webserver

Oct 19 19:29:09.648: INFO: encountered error during dial (did not find expected responses... 
Tries 1
Command curl -g -q -s 'http://100.96.4.103:9080/dial?request=hostname&protocol=http&host=100.96.1.120&port=8080&tries=1'
retrieved map[]
expected map[netserver-0:{}])
Oct 19 19:29:09.648: INFO: ...failed...will try again in next pass
Oct 19 19:29:09.648: INFO: Breadth first check of 100.96.3.131 on host 172.20.43.129...
Oct 19 19:29:09.754: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://100.96.4.103:9080/dial?request=hostname&protocol=http&host=100.96.3.131&port=8080&tries=1'] Namespace:pod-network-test-6557 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 19 19:29:09.754: INFO: >>> kubeConfig: /root/.kube/config
Oct 19 19:29:15.481: INFO: Waiting for responses: map[netserver-1:{}]
Oct 19 19:29:17.482: INFO: 
Output of kubectl describe pod pod-network-test-6557/netserver-0:
... skipping 240 lines ...
  ----    ------     ----  ----               -------
  Normal  Scheduled  44s   default-scheduler  Successfully assigned pod-network-test-6557/netserver-3 to ip-172-20-55-71.eu-west-1.compute.internal
  Normal  Pulled     43s   kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
  Normal  Created    43s   kubelet            Created container webserver
  Normal  Started    43s   kubelet            Started container webserver

Oct 19 19:29:19.987: INFO: encountered error during dial (did not find expected responses... 
Tries 1
Command curl -g -q -s 'http://100.96.4.103:9080/dial?request=hostname&protocol=http&host=100.96.3.131&port=8080&tries=1'
retrieved map[]
expected map[netserver-1:{}])
Oct 19 19:29:19.987: INFO: ...failed...will try again in next pass
Oct 19 19:29:19.987: INFO: Breadth first check of 100.96.2.122 on host 172.20.52.34...
Oct 19 19:29:20.094: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://100.96.4.103:9080/dial?request=hostname&protocol=http&host=100.96.2.122&port=8080&tries=1'] Namespace:pod-network-test-6557 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 19 19:29:20.094: INFO: >>> kubeConfig: /root/.kube/config
Oct 19 19:29:25.825: INFO: Waiting for responses: map[netserver-2:{}]
Oct 19 19:29:27.826: INFO: 
Output of kubectl describe pod pod-network-test-6557/netserver-0:
... skipping 240 lines ...
  ----    ------     ----  ----               -------
  Normal  Scheduled  55s   default-scheduler  Successfully assigned pod-network-test-6557/netserver-3 to ip-172-20-55-71.eu-west-1.compute.internal
  Normal  Pulled     54s   kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
  Normal  Created    54s   kubelet            Created container webserver
  Normal  Started    54s   kubelet            Started container webserver

Oct 19 19:29:30.394: INFO: encountered error during dial (did not find expected responses... 
Tries 1
Command curl -g -q -s 'http://100.96.4.103:9080/dial?request=hostname&protocol=http&host=100.96.2.122&port=8080&tries=1'
retrieved map[]
expected map[netserver-2:{}])
Oct 19 19:29:30.394: INFO: ...failed...will try again in next pass
Oct 19 19:29:30.394: INFO: Breadth first check of 100.96.4.97 on host 172.20.55.71...
Oct 19 19:29:30.501: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://100.96.4.103:9080/dial?request=hostname&protocol=http&host=100.96.4.97&port=8080&tries=1'] Namespace:pod-network-test-6557 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 19 19:29:30.501: INFO: >>> kubeConfig: /root/.kube/config
Oct 19 19:29:31.251: INFO: Waiting for responses: map[]
Oct 19 19:29:31.251: INFO: reached 100.96.4.97 after 0/1 tries
Oct 19 19:29:31.251: INFO: Going to retry 3 out of 4 pods....
... skipping 382 lines ...
  ----    ------     ----  ----               -------
  Normal  Scheduled  7m1s  default-scheduler  Successfully assigned pod-network-test-6557/netserver-3 to ip-172-20-55-71.eu-west-1.compute.internal
  Normal  Pulled     7m    kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
  Normal  Created    7m    kubelet            Created container webserver
  Normal  Started    7m    kubelet            Started container webserver

Oct 19 19:35:36.032: INFO: encountered error during dial (did not find expected responses... 
Tries 46
Command curl -g -q -s 'http://100.96.4.103:9080/dial?request=hostname&protocol=http&host=100.96.1.120&port=8080&tries=1'
retrieved map[]
expected map[netserver-0:{}])
Oct 19 19:35:36.032: INFO: ... Done probing pod [[[ 100.96.1.120 ]]]
Oct 19 19:35:36.032: INFO: succeeded at polling 3 out of 4 connections
... skipping 382 lines ...
  ----    ------     ----  ----               -------
  Normal  Scheduled  13m   default-scheduler  Successfully assigned pod-network-test-6557/netserver-3 to ip-172-20-55-71.eu-west-1.compute.internal
  Normal  Pulled     13m   kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
  Normal  Created    13m   kubelet            Created container webserver
  Normal  Started    13m   kubelet            Started container webserver

Oct 19 19:41:40.159: INFO: encountered error during dial (did not find expected responses... 
Tries 46
Command curl -g -q -s 'http://100.96.4.103:9080/dial?request=hostname&protocol=http&host=100.96.3.131&port=8080&tries=1'
retrieved map[]
expected map[netserver-1:{}])
Oct 19 19:41:40.159: INFO: ... Done probing pod [[[ 100.96.3.131 ]]]
Oct 19 19:41:40.159: INFO: succeeded at polling 2 out of 4 connections
... skipping 382 lines ...
  ----    ------     ----  ----               -------
  Normal  Scheduled  19m   default-scheduler  Successfully assigned pod-network-test-6557/netserver-3 to ip-172-20-55-71.eu-west-1.compute.internal
  Normal  Pulled     19m   kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
  Normal  Created    19m   kubelet            Created container webserver
  Normal  Started    19m   kubelet            Started container webserver

Oct 19 19:47:46.142: INFO: encountered error during dial (did not find expected responses... 
Tries 46
Command curl -g -q -s 'http://100.96.4.103:9080/dial?request=hostname&protocol=http&host=100.96.2.122&port=8080&tries=1'
retrieved map[]
expected map[netserver-2:{}])
Oct 19 19:47:46.142: INFO: ... Done probing pod [[[ 100.96.2.122 ]]]
Oct 19 19:47:46.142: INFO: succeeded at polling 1 out of 4 connections
Oct 19 19:47:46.142: INFO: pod polling failure summary:
Oct 19 19:47:46.142: INFO: Collected error: did not find expected responses... 
Tries 46
Command curl -g -q -s 'http://100.96.4.103:9080/dial?request=hostname&protocol=http&host=100.96.1.120&port=8080&tries=1'
retrieved map[]
expected map[netserver-0:{}]
Oct 19 19:47:46.142: INFO: Collected error: did not find expected responses... 
Tries 46
Command curl -g -q -s 'http://100.96.4.103:9080/dial?request=hostname&protocol=http&host=100.96.3.131&port=8080&tries=1'
retrieved map[]
expected map[netserver-1:{}]
Oct 19 19:47:46.142: INFO: Collected error: did not find expected responses... 
Tries 46
Command curl -g -q -s 'http://100.96.4.103:9080/dial?request=hostname&protocol=http&host=100.96.2.122&port=8080&tries=1'
retrieved map[]
expected map[netserver-2:{}]
Oct 19 19:47:46.142: FAIL: failed,  3 out of 4 connections failed

Full Stack Trace
k8s.io/kubernetes/test/e2e/common/network.glob..func1.1.2()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:82 +0x69
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000ff8780)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
... skipping 229 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23
  Granular Checks: Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30
    should function for intra-pod communication: http [NodeConformance] [Conformance] [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

    Oct 19 19:47:46.142: failed,  3 out of 4 connections failed

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:82
------------------------------
{"msg":"FAILED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":104,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}

SS
------------------------------
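The three collected dial errors above all have the same shape: the prober pod (100.96.4.103) reached its node-local peer but never got a response from pods on the other nodes. A minimal sketch for replaying one probe by hand, assuming the pod-network-test-6557 namespace, the test-container-pod, and the pod IPs from the log are still live (the framework tears them down after the run):

    # Exec the exact curl the framework ran, from inside the prober pod.
    # agnhost's /dial endpoint forwards an HTTP request to host:port and
    # reports back which backends answered.
    kubectl exec -n pod-network-test-6557 test-container-pod -c webserver -- \
      curl -g -q -s 'http://100.96.4.103:9080/dial?request=hostname&protocol=http&host=100.96.1.120&port=8080&tries=1'

    # A healthy path returns a JSON body naming the responder (netserver-0
    # here); the empty map[] in the log means no response ever arrived,
    # i.e. cross-node pod networking was broken for 3 of the 4 targets.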
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:47:50.700: INFO: Only supported for providers [gce gke] (not aws)
... skipping 138 lines ...
• [SLOW TEST:21.177 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":-1,"completed":29,"skipped":262,"failed":2,"failures":["[sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PV and a pre-bound PVC: test write access"]}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:47:51.308: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 24 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_secret.go:90
STEP: Creating projection with secret that has name projected-secret-test-331c3cda-0855-4dc6-ae52-336607092eea
STEP: Creating a pod to test consume secrets
Oct 19 19:47:49.356: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e1447668-0566-49e1-bcbc-69427bcbdf6e" in namespace "projected-3853" to be "Succeeded or Failed"
Oct 19 19:47:49.463: INFO: Pod "pod-projected-secrets-e1447668-0566-49e1-bcbc-69427bcbdf6e": Phase="Pending", Reason="", readiness=false. Elapsed: 106.590967ms
Oct 19 19:47:51.569: INFO: Pod "pod-projected-secrets-e1447668-0566-49e1-bcbc-69427bcbdf6e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.213020062s
STEP: Saw pod success
Oct 19 19:47:51.569: INFO: Pod "pod-projected-secrets-e1447668-0566-49e1-bcbc-69427bcbdf6e" satisfied condition "Succeeded or Failed"
Oct 19 19:47:51.675: INFO: Trying to get logs from node ip-172-20-55-71.eu-west-1.compute.internal pod pod-projected-secrets-e1447668-0566-49e1-bcbc-69427bcbdf6e container projected-secret-volume-test: <nil>
STEP: delete the pod
Oct 19 19:47:51.892: INFO: Waiting for pod pod-projected-secrets-e1447668-0566-49e1-bcbc-69427bcbdf6e to disappear
Oct 19 19:47:52.000: INFO: Pod pod-projected-secrets-e1447668-0566-49e1-bcbc-69427bcbdf6e no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 19 19:47:52.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3853" for this suite.
STEP: Destroying namespace "secret-namespace-2237" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]","total":-1,"completed":26,"skipped":213,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:47:52.338: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 127 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: block] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":31,"skipped":217,"failed":3,"failures":["[sig-network] Services should implement service.kubernetes.io/service-proxy-name","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 19 19:47:54.225: INFO: Driver "csi-hostpath" does not support topology - skipping
... skipping 5 lines ...
[sig-storage] CSI Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: csi-hostpath]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver "csi-hostpath" does not support topology - skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:92
------------------------------
... skipping 31 lines ...
Oct 19 19:47:52.387: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0777 on node default medium
Oct 19 19:47:53.027: INFO: Waiting up to 5m0s for pod "pod-8c168b28-c057-4cb4-998d-11bf7e5cf031" in namespace "emptydir-9406" to be "Succeeded or Failed"
Oct 19 19:47:53.133: INFO: Pod "pod-8c168b28-c057-4cb4-998d-11bf7e5cf031": Phase="Pending", Reason="", readiness=false. Elapsed: 106.062487ms
Oct 19 19:47:55.240: INFO: Pod "pod-8c168b28-c057-4cb4-998d-11bf7e5cf031": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.21288698s
STEP: Saw pod success
Oct 19 19:47:55.240: INFO: Pod "pod-8c168b28-c057-4cb4-998d-11bf7e5cf031" satisfied condition "Succeeded or Failed"
Oct 19 19:47:55.346: INFO: Trying to get logs from node ip-172-20-55-71.eu-west-1.compute.internal pod pod-8c168b28-c057-4cb4-998d-11bf7e5cf031 container test-container: <nil>
STEP: delete the pod
Oct 19 19:47:55.566: INFO: Waiting for pod pod-8c168b28-c057-4cb4-998d-11bf7e5cf031 to disappear
Oct 19 19:47:55.675: INFO: Pod pod-8c168b28-c057-4cb4-998d-11bf7e5cf031 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 19 19:47:55.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9406" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":27,"skipped":218,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]"]}
Oct 19 19:47:55.898: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-cli] Kubectl Port forwarding
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 33 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:474
    that expects NO client request
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:484
      should support a client that connects, sends DATA, and disconnects
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:485
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects NO client request should support a client that connects, sends DATA, and disconnects","total":-1,"completed":45,"skipped":333,"failed":3,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}
Oct 19 19:47:57.875: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 19 lines ...
• [SLOW TEST:62.393 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be restarted by liveness probe after startup probe enables it
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:371
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted by liveness probe after startup probe enables it","total":-1,"completed":40,"skipped":283,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]"]}
Oct 19 19:47:59.918: INFO: Running AfterSuite actions on all nodes


{"msg":"FAILED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":-1,"completed":39,"skipped":287,"failed":2,"failures":["[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]"]}
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 19 19:42:30.694: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 6 lines ...
STEP: Delete the cronjob
W1019 19:43:00.768920    5370 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob
STEP: Verify if cronjob does not leave jobs nor pods behind
W1019 19:43:00.875255    5370 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob
STEP: Gathering metrics
W1019 19:43:01.196188    5370 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Oct 19 19:48:01.409: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering.
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 19 19:48:01.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1420" for this suite.


• [SLOW TEST:330.930 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete jobs and pods created by cronjob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/garbage_collector.go:1160
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete jobs and pods created by cronjob","total":-1,"completed":40,"skipped":287,"failed":2,"failures":["[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]"]}
Oct 19 19:48:01.635: INFO: Running AfterSuite actions on all nodes
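The batch/v1beta1 CronJob warnings repeated above are advisory for now but become hard failures once a cluster reaches 1.25. A minimal sketch, assuming kubectl access with this run's kubeconfig, for confirming which group/versions serve CronJob and reading the objects through the non-deprecated one:

    # Both batch/v1 and batch/v1beta1 should be listed while the beta API is still served.
    kubectl api-versions | grep '^batch/'
    # Fetch CronJobs explicitly via batch/v1 (resource.version.group syntax).
    kubectl get cronjobs.v1.batch -A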


[BeforeEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 18 lines ...
• [SLOW TEST:117.285 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should schedule multiple jobs concurrently [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","total":-1,"completed":20,"skipped":101,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","[sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-network] DNS should provide DNS for services  [Conformance]"]}
Oct 19 19:48:01.871: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 22 lines ...
Oct 19 19:48:02.006: INFO: PersistentVolumeClaim pvc-n9jlk found but phase is Pending instead of Bound.
Oct 19 19:48:04.113: INFO: PersistentVolumeClaim pvc-n9jlk found and phase=Bound (14.853311679s)
Oct 19 19:48:04.113: INFO: Waiting up to 3m0s for PersistentVolume local-f2g9r to have phase Bound
Oct 19 19:48:04.219: INFO: PersistentVolume local-f2g9r found and phase=Bound (106.153707ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-vghk
STEP: Creating a pod to test subpath
Oct 19 19:48:04.538: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-vghk" in namespace "provisioning-5299" to be "Succeeded or Failed"
Oct 19 19:48:04.644: INFO: Pod "pod-subpath-test-preprovisionedpv-vghk": Phase="Pending", Reason="", readiness=false. Elapsed: 105.768077ms
Oct 19 19:48:06.750: INFO: Pod "pod-subpath-test-preprovisionedpv-vghk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.212443455s
Oct 19 19:48:08.857: INFO: Pod "pod-subpath-test-preprovisionedpv-vghk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.319002679s
STEP: Saw pod success
Oct 19 19:48:08.857: INFO: Pod "pod-subpath-test-preprovisionedpv-vghk" satisfied condition "Succeeded or Failed"
Oct 19 19:48:08.963: INFO: Trying to get logs from node ip-172-20-35-5.eu-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-vghk container test-container-volume-preprovisionedpv-vghk: <nil>
STEP: delete the pod
Oct 19 19:48:09.183: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-vghk to disappear
Oct 19 19:48:09.289: INFO: Pod pod-subpath-test-preprovisionedpv-vghk no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-vghk
Oct 19 19:48:09.289: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-vghk" in namespace "provisioning-5299"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":22,"skipped":218,"failed":1,"failures":["[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]"]}
Oct 19 19:48:11.513: INFO: Running AfterSuite actions on all nodes
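The PVC in the block above sat Pending for ~15s before binding, which is normal for pre-provisioned local volumes. A minimal sketch for watching the same transition interactively, assuming the test namespace and claim name from the log (both are deleted after the run):

    # Watch the claim move Pending -> Bound.
    kubectl get pvc -n provisioning-5299 -w
    # If it stalls, the Events section usually names the blocker.
    kubectl describe pvc pvc-n9jlk -n provisioning-5299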


[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 18 lines ...
Oct 19 19:48:01.205: INFO: PersistentVolumeClaim pvc-lm2fp found but phase is Pending instead of Bound.
Oct 19 19:48:03.312: INFO: PersistentVolumeClaim pvc-lm2fp found and phase=Bound (8.532274756s)
Oct 19 19:48:03.312: INFO: Waiting up to 3m0s for PersistentVolume local-rzpdb to have phase Bound
Oct 19 19:48:03.418: INFO: PersistentVolume local-rzpdb found and phase=Bound (105.678077ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-l4ql
STEP: Creating a pod to test subpath
Oct 19 19:48:03.738: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-l4ql" in namespace "provisioning-443" to be "Succeeded or Failed"
Oct 19 19:48:03.844: INFO: Pod "pod-subpath-test-preprovisionedpv-l4ql": Phase="Pending", Reason="", readiness=false. Elapsed: 105.834806ms
Oct 19 19:48:05.950: INFO: Pod "pod-subpath-test-preprovisionedpv-l4ql": Phase="Pending", Reason="", readiness=false. Elapsed: 2.212167543s
Oct 19 19:48:08.057: INFO: Pod "pod-subpath-test-preprovisionedpv-l4ql": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.31891275s
STEP: Saw pod success
Oct 19 19:48:08.057: INFO: Pod "pod-subpath-test-preprovisionedpv-l4ql" satisfied condition "Succeeded or Failed"
Oct 19 19:48:08.162: INFO: Trying to get logs from node ip-172-20-52-34.eu-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-l4ql container test-container-subpath-preprovisionedpv-l4ql: <nil>
STEP: delete the pod
Oct 19 19:48:08.382: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-l4ql to disappear
Oct 19 19:48:08.488: INFO: Pod pod-subpath-test-preprovisionedpv-l4ql no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-l4ql
Oct 19 19:48:08.488: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-l4ql" in namespace "provisioning-443"
STEP: Creating pod pod-subpath-test-preprovisionedpv-l4ql
STEP: Creating a pod to test subpath
Oct 19 19:48:08.700: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-l4ql" in namespace "provisioning-443" to be "Succeeded or Failed"
Oct 19 19:48:08.806: INFO: Pod "pod-subpath-test-preprovisionedpv-l4ql": Phase="Pending", Reason="", readiness=false. Elapsed: 105.543097ms
Oct 19 19:48:10.912: INFO: Pod "pod-subpath-test-preprovisionedpv-l4ql": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.211749393s
STEP: Saw pod success
Oct 19 19:48:10.912: INFO: Pod "pod-subpath-test-preprovisionedpv-l4ql" satisfied condition "Succeeded or Failed"
Oct 19 19:48:11.018: INFO: Trying to get logs from node ip-172-20-52-34.eu-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-l4ql container test-container-subpath-preprovisionedpv-l4ql: <nil>
STEP: delete the pod
Oct 19 19:48:11.237: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-l4ql to disappear
Oct 19 19:48:11.363: INFO: Pod pod-subpath-test-preprovisionedpv-l4ql no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-l4ql
Oct 19 19:48:11.363: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-l4ql" in namespace "provisioning-443"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:399
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":9,"skipped":127,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}
Oct 19 19:48:12.853: INFO: Running AfterSuite actions on all nodes


{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":25,"skipped":201,"failed":5,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a ClusterIP service","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Services should be able to up and down services"]}
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 19 19:47:41.229: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename csi-mock-volumes
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 100 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI attach test using mock driver
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:316
    should require VolumeAttach for drivers with attachment
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:338
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI attach test using mock driver should require VolumeAttach for drivers with attachment","total":-1,"completed":26,"skipped":201,"failed":5,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a ClusterIP service","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Services should be able to up and down services"]}
Oct 19 19:48:31.465: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
... skipping 40 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should resize volume when PVC is edited while pod is using it
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:246
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it","total":-1,"completed":44,"skipped":300,"failed":3,"failures":["[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PVC and non-pre-bound PV: test write access","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]"]}
Oct 19 19:48:33.851: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
... skipping 9 lines ...
Oct 19 19:47:24.444: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-9203brpjl
STEP: creating a claim
Oct 19 19:47:24.552: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-zvp2
STEP: Creating a pod to test subpath
Oct 19 19:47:24.877: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-zvp2" in namespace "provisioning-9203" to be "Succeeded or Failed"
Oct 19 19:47:24.984: INFO: Pod "pod-subpath-test-dynamicpv-zvp2": Phase="Pending", Reason="", readiness=false. Elapsed: 106.340428ms
Oct 19 19:47:27.090: INFO: Pod "pod-subpath-test-dynamicpv-zvp2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.213077496s
Oct 19 19:47:29.211: INFO: Pod "pod-subpath-test-dynamicpv-zvp2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.333890991s
Oct 19 19:47:31.318: INFO: Pod "pod-subpath-test-dynamicpv-zvp2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.440486566s
Oct 19 19:47:33.425: INFO: Pod "pod-subpath-test-dynamicpv-zvp2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.547486639s
Oct 19 19:47:35.532: INFO: Pod "pod-subpath-test-dynamicpv-zvp2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.655026286s
... skipping 10 lines ...
Oct 19 19:47:58.731: INFO: Pod "pod-subpath-test-dynamicpv-zvp2": Phase="Pending", Reason="", readiness=false. Elapsed: 33.854070867s
Oct 19 19:48:00.853: INFO: Pod "pod-subpath-test-dynamicpv-zvp2": Phase="Pending", Reason="", readiness=false. Elapsed: 35.975431494s
Oct 19 19:48:02.961: INFO: Pod "pod-subpath-test-dynamicpv-zvp2": Phase="Pending", Reason="", readiness=false. Elapsed: 38.084005711s
Oct 19 19:48:05.069: INFO: Pod "pod-subpath-test-dynamicpv-zvp2": Phase="Pending", Reason="", readiness=false. Elapsed: 40.191315763s
Oct 19 19:48:07.176: INFO: Pod "pod-subpath-test-dynamicpv-zvp2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 42.298833201s
STEP: Saw pod success
Oct 19 19:48:07.176: INFO: Pod "pod-subpath-test-dynamicpv-zvp2" satisfied condition "Succeeded or Failed"
Oct 19 19:48:07.282: INFO: Trying to get logs from node ip-172-20-55-71.eu-west-1.compute.internal pod pod-subpath-test-dynamicpv-zvp2 container test-container-subpath-dynamicpv-zvp2: <nil>
STEP: delete the pod
Oct 19 19:48:07.500: INFO: Waiting for pod pod-subpath-test-dynamicpv-zvp2 to disappear
Oct 19 19:48:07.606: INFO: Pod pod-subpath-test-dynamicpv-zvp2 no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-zvp2
Oct 19 19:48:07.606: INFO: Deleting pod "pod-subpath-test-dynamicpv-zvp2" in namespace "provisioning-9203"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:384
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":17,"skipped":109,"failed":1,"failures":["[sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns should create 2 PVs and 4 PVCs: test write access"]}
Oct 19 19:48:34.114: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
... skipping 9 lines ...
Oct 19 19:47:50.756: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-20824lrr4
STEP: creating a claim
Oct 19 19:47:50.864: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-l86q
STEP: Creating a pod to test subpath
Oct 19 19:47:51.192: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-l86q" in namespace "provisioning-2082" to be "Succeeded or Failed"
Oct 19 19:47:51.300: INFO: Pod "pod-subpath-test-dynamicpv-l86q": Phase="Pending", Reason="", readiness=false. Elapsed: 108.078197ms
Oct 19 19:47:53.409: INFO: Pod "pod-subpath-test-dynamicpv-l86q": Phase="Pending", Reason="", readiness=false. Elapsed: 2.217612161s
Oct 19 19:47:55.518: INFO: Pod "pod-subpath-test-dynamicpv-l86q": Phase="Pending", Reason="", readiness=false. Elapsed: 4.326322185s
Oct 19 19:47:57.628: INFO: Pod "pod-subpath-test-dynamicpv-l86q": Phase="Pending", Reason="", readiness=false. Elapsed: 6.436288984s
Oct 19 19:47:59.738: INFO: Pod "pod-subpath-test-dynamicpv-l86q": Phase="Pending", Reason="", readiness=false. Elapsed: 8.546128991s
Oct 19 19:48:01.847: INFO: Pod "pod-subpath-test-dynamicpv-l86q": Phase="Pending", Reason="", readiness=false. Elapsed: 10.655172377s
Oct 19 19:48:03.957: INFO: Pod "pod-subpath-test-dynamicpv-l86q": Phase="Pending", Reason="", readiness=false. Elapsed: 12.765225491s
Oct 19 19:48:06.066: INFO: Pod "pod-subpath-test-dynamicpv-l86q": Phase="Pending", Reason="", readiness=false. Elapsed: 14.873979427s
Oct 19 19:48:08.175: INFO: Pod "pod-subpath-test-dynamicpv-l86q": Phase="Pending", Reason="", readiness=false. Elapsed: 16.983595943s
Oct 19 19:48:10.285: INFO: Pod "pod-subpath-test-dynamicpv-l86q": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.093179258s
STEP: Saw pod success
Oct 19 19:48:10.285: INFO: Pod "pod-subpath-test-dynamicpv-l86q" satisfied condition "Succeeded or Failed"
Oct 19 19:48:10.393: INFO: Trying to get logs from node ip-172-20-55-71.eu-west-1.compute.internal pod pod-subpath-test-dynamicpv-l86q container test-container-subpath-dynamicpv-l86q: <nil>
STEP: delete the pod
Oct 19 19:48:10.617: INFO: Waiting for pod pod-subpath-test-dynamicpv-l86q to disappear
Oct 19 19:48:10.725: INFO: Pod pod-subpath-test-dynamicpv-l86q no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-l86q
Oct 19 19:48:10.725: INFO: Deleting pod "pod-subpath-test-dynamicpv-l86q" in namespace "provisioning-2082"
... skipping 25 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":48,"skipped":349,"failed":3,"failures":["[sig-network] Conntrack should drop INVALID conntrack entries","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]"]}
Oct 19 19:48:52.587: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 19 19:47:45.838: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename cronjob
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:63
W1019 19:47:46.484371    5392 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob
[It] should delete failed finished jobs with limit of one job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:294
STEP: Creating an AllowConcurrent cronjob with custom history limit
STEP: Ensuring a finished job exists
STEP: Ensuring a finished job exists by listing jobs explicitly
STEP: Ensuring this job and its pods does not exist anymore
STEP: Ensuring there is 1 finished job by listing jobs explicitly
... skipping 4 lines ...
STEP: Destroying namespace "cronjob-2266" for this suite.


• [SLOW TEST:81.620 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete failed finished jobs with limit of one job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:294
------------------------------
{"msg":"PASSED [sig-apps] CronJob should delete failed finished jobs with limit of one job","total":-1,"completed":37,"skipped":252,"failed":1,"failures":["[sig-network] Services should implement service.kubernetes.io/headless"]}
Oct 19 19:49:07.469: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 102 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI online volume expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:672
    should expand volume without restarting pod if attach=on, nodeExpansion=on
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:687
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI online volume expansion should expand volume without restarting pod if attach=on, nodeExpansion=on","total":-1,"completed":28,"skipped":154,"failed":4,"failures":["[sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PVC and a pre-bound PV: test write access","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]"]}
Oct 19 19:49:50.409: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-storage] PersistentVolumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 27 lines ...
Oct 19 19:44:34.379: INFO: PersistentVolume nfs-crlfx found and phase=Bound (104.579877ms)
Oct 19 19:44:34.484: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-7x4bt] to have phase Bound
Oct 19 19:44:34.589: INFO: PersistentVolumeClaim pvc-7x4bt found and phase=Bound (104.791766ms)
STEP: Checking pod has write access to PersistentVolumes
Oct 19 19:44:34.694: INFO: Creating nfs test pod
Oct 19 19:44:34.800: INFO: Pod should terminate with exitcode 0 (success)
Oct 19 19:44:34.800: INFO: Waiting up to 5m0s for pod "pvc-tester-cnz6d" in namespace "pv-6915" to be "Succeeded or Failed"
Oct 19 19:44:34.909: INFO: Pod "pvc-tester-cnz6d": Phase="Pending", Reason="", readiness=false. Elapsed: 108.962738ms
Oct 19 19:44:37.015: INFO: Pod "pvc-tester-cnz6d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.214967569s
Oct 19 19:44:39.121: INFO: Pod "pvc-tester-cnz6d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.320896268s
Oct 19 19:44:41.228: INFO: Pod "pvc-tester-cnz6d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.427998579s
Oct 19 19:44:43.334: INFO: Pod "pvc-tester-cnz6d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.533845653s
Oct 19 19:44:45.442: INFO: Pod "pvc-tester-cnz6d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.642069125s
... skipping 133 lines ...
Oct 19 19:49:27.746: INFO: Pod "pvc-tester-cnz6d": Phase="Pending", Reason="", readiness=false. Elapsed: 4m52.945871005s
Oct 19 19:49:29.853: INFO: Pod "pvc-tester-cnz6d": Phase="Pending", Reason="", readiness=false. Elapsed: 4m55.052649345s
Oct 19 19:49:31.960: INFO: Pod "pvc-tester-cnz6d": Phase="Pending", Reason="", readiness=false. Elapsed: 4m57.159639975s
Oct 19 19:49:34.066: INFO: Pod "pvc-tester-cnz6d": Phase="Pending", Reason="", readiness=false. Elapsed: 4m59.265628001s
Oct 19 19:49:36.066: INFO: Deleting pod "pvc-tester-cnz6d" in namespace "pv-6915"
Oct 19 19:49:36.172: INFO: Wait up to 5m0s for pod "pvc-tester-cnz6d" to be fully deleted
Oct 19 19:49:40.383: FAIL: Unexpected error:
    <*errors.errorString | 0xc001c013c0>: {
        s: "pod \"pvc-tester-cnz6d\" did not exit with Success: pod \"pvc-tester-cnz6d\" failed to reach Success: Gave up after waiting 5m0s for pod \"pvc-tester-cnz6d\" to be \"Succeeded or Failed\"",
    }
    pod "pvc-tester-cnz6d" did not exit with Success: pod "pvc-tester-cnz6d" failed to reach Success: Gave up after waiting 5m0s for pod "pvc-tester-cnz6d" to be "Succeeded or Failed"
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/storage.glob..func22.2.4.3()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:248 +0x371
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0024fa480)
... skipping 24 lines ...
Oct 19 19:49:47.445: INFO: At 2021-10-19 19:44:30 +0000 UTC - event for nfs-server: {default-scheduler } Scheduled: Successfully assigned pv-6915/nfs-server to ip-172-20-52-34.eu-west-1.compute.internal
Oct 19 19:49:47.445: INFO: At 2021-10-19 19:44:30 +0000 UTC - event for nfs-server: {kubelet ip-172-20-52-34.eu-west-1.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/volume/nfs:1.2" already present on machine
Oct 19 19:49:47.445: INFO: At 2021-10-19 19:44:30 +0000 UTC - event for nfs-server: {kubelet ip-172-20-52-34.eu-west-1.compute.internal} Created: Created container nfs-server
Oct 19 19:49:47.445: INFO: At 2021-10-19 19:44:30 +0000 UTC - event for nfs-server: {kubelet ip-172-20-52-34.eu-west-1.compute.internal} Started: Started container nfs-server
Oct 19 19:49:47.445: INFO: At 2021-10-19 19:44:34 +0000 UTC - event for pvc-tester-cnz6d: {default-scheduler } Scheduled: Successfully assigned pv-6915/pvc-tester-cnz6d to ip-172-20-35-5.eu-west-1.compute.internal
Oct 19 19:49:47.445: INFO: At 2021-10-19 19:46:37 +0000 UTC - event for pvc-tester-cnz6d: {kubelet ip-172-20-35-5.eu-west-1.compute.internal} FailedMount: Unable to attach or mount volumes: unmounted volumes=[volume1], unattached volumes=[volume1 kube-api-access-8gnm4]: timed out waiting for the condition
Oct 19 19:49:47.445: INFO: At 2021-10-19 19:47:37 +0000 UTC - event for pvc-tester-cnz6d: {kubelet ip-172-20-35-5.eu-west-1.compute.internal} FailedMount: MountVolume.SetUp failed for volume "nfs-ptt6s" : mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t nfs 100.96.2.91:/exports /var/lib/kubelet/pods/b65c3e84-a16d-42e7-a0bc-9db72e5c7ce5/volumes/kubernetes.io~nfs/nfs-ptt6s
Output: mount.nfs: Connection timed out

Oct 19 19:49:47.445: INFO: At 2021-10-19 19:49:41 +0000 UTC - event for nfs-server: {kubelet ip-172-20-52-34.eu-west-1.compute.internal} Killing: Stopping container nfs-server
Oct 19 19:49:47.550: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
... skipping 144 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:122
    with multiple PVs and PVCs all in same ns
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:212
      should create 3 PVs and 3 PVCs: test write access [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:243

      Oct 19 19:49:40.383: Unexpected error:
          <*errors.errorString | 0xc001c013c0>: {
              s: "pod \"pvc-tester-cnz6d\" did not exit with Success: pod \"pvc-tester-cnz6d\" failed to reach Success: Gave up after waiting 5m0s for pod \"pvc-tester-cnz6d\" to be \"Succeeded or Failed\"",
          }
          pod "pvc-tester-cnz6d" did not exit with Success: pod "pvc-tester-cnz6d" failed to reach Success: Gave up after waiting 5m0s for pod "pvc-tester-cnz6d" to be "Succeeded or Failed"
      occurred

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:248
------------------------------
{"msg":"FAILED [sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns should create 3 PVs and 3 PVCs: test write access","total":-1,"completed":1,"skipped":24,"failed":2,"failures":["[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns should create 3 PVs and 3 PVCs: test write access"]}
Oct 19 19:49:52.053: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 4 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/garbage_collector.go:454
STEP: create the rc
STEP: delete the rc
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W1019 19:44:57.280452    5562 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Oct 19 19:49:57.490: INFO: MetricsGrabber failed to grab metrics. Skipping metrics gathering.
Oct 19 19:49:57.490: INFO: Deleting pod "simpletest.rc-gfhnr" in namespace "gc-9965"
Oct 19 19:49:57.601: INFO: Deleting pod "simpletest.rc-jqw9x" in namespace "gc-9965"
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 19 19:49:57.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9965" for this suite.
... skipping 2 lines ...
• [SLOW TEST:336.748 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if deleteOptions.OrphanDependents is nil
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/garbage_collector.go:454
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if deleteOptions.OrphanDependents is nil","total":-1,"completed":35,"skipped":200,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}
Oct 19 19:49:57.944: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 5 lines ...
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W1019 19:45:22.865590    5438 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Oct 19 19:50:23.077: INFO: MetricsGrabber failed to grab metrics. Skipping metrics gathering.
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 19 19:50:23.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2632" for this suite.


• [SLOW TEST:302.023 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":-1,"completed":23,"skipped":157,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-network] DNS should resolve DNS of partial qualified names for the cluster [LinuxOnly]"]}
Oct 19 19:50:23.301: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 9 lines ...
STEP: creating replication controller affinity-nodeport in namespace services-5142
I1019 19:47:52.092981    5490 runners.go:190] Created replication controller with name: affinity-nodeport, namespace: services-5142, replica count: 3
I1019 19:47:55.243808    5490 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Oct 19 19:47:55.565: INFO: Creating new exec pod
Oct 19 19:47:58.998: INFO: Running '/tmp/kubectl1810586729/kubectl --server=https://api.e2e-e05d2a908c-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5142 exec execpod-affinityttk7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80'
Oct 19 19:48:05.250: INFO: rc: 1
Oct 19 19:48:05.250: INFO: Service reachability failing with error: error running /tmp/kubectl1810586729/kubectl --server=https://api.e2e-e05d2a908c-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5142 exec execpod-affinityttk7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 affinity-nodeport 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 19 19:48:06.251: INFO: Running '/tmp/kubectl1810586729/kubectl --server=https://api.e2e-e05d2a908c-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5142 exec execpod-affinityttk7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80'
Oct 19 19:48:12.448: INFO: rc: 1
Oct 19 19:48:12.448: INFO: Service reachability failing with error: error running /tmp/kubectl1810586729/kubectl --server=https://api.e2e-e05d2a908c-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5142 exec execpod-affinityttk7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 affinity-nodeport 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 19 19:48:13.251: INFO: Running '/tmp/kubectl1810586729/kubectl --server=https://api.e2e-e05d2a908c-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5142 exec execpod-affinityttk7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80'
Oct 19 19:48:19.433: INFO: rc: 1
Oct 19 19:48:19.434: INFO: Service reachability failing with error: error running /tmp/kubectl1810586729/kubectl --server=https://api.e2e-e05d2a908c-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5142 exec execpod-affinityttk7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 affinity-nodeport 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 19 19:48:20.251: INFO: Running '/tmp/kubectl1810586729/kubectl --server=https://api.e2e-e05d2a908c-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5142 exec execpod-affinityttk7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80'
Oct 19 19:48:26.450: INFO: rc: 1
Oct 19 19:48:26.450: INFO: Service reachability failing with error: error running /tmp/kubectl1810586729/kubectl --server=https://api.e2e-e05d2a908c-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5142 exec execpod-affinityttk7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 affinity-nodeport 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 19 19:48:27.251: INFO: Running '/tmp/kubectl1810586729/kubectl --server=https://api.e2e-e05d2a908c-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5142 exec execpod-affinityttk7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80'
Oct 19 19:48:33.451: INFO: rc: 1
Oct 19 19:48:33.451: INFO: Service reachability failing with error: error running /tmp/kubectl1810586729/kubectl --server=https://api.e2e-e05d2a908c-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5142 exec execpod-affinityttk7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 affinity-nodeport 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 19 19:48:34.251: INFO: Running '/tmp/kubectl1810586729/kubectl --server=https://api.e2e-e05d2a908c-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5142 exec execpod-affinityttk7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80'
Oct 19 19:48:40.403: INFO: rc: 1
Oct 19 19:48:40.403: INFO: Service reachability failing with error: error running /tmp/kubectl1810586729/kubectl --server=https://api.e2e-e05d2a908c-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5142 exec execpod-affinityttk7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 affinity-nodeport 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 19 19:48:41.251: INFO: Running '/tmp/kubectl1810586729/kubectl --server=https://api.e2e-e05d2a908c-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5142 exec execpod-affinityttk7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80'
Oct 19 19:48:47.452: INFO: rc: 1
Oct 19 19:48:47.452: INFO: Service reachability failing with error: error running /tmp/kubectl1810586729/kubectl --server=https://api.e2e-e05d2a908c-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5142 exec execpod-affinityttk7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 affinity-nodeport 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 19 19:48:48.251: INFO: Running '/tmp/kubectl1810586729/kubectl --server=https://api.e2e-e05d2a908c-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5142 exec execpod-affinityttk7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80'
Oct 19 19:48:54.482: INFO: rc: 1
Oct 19 19:48:54.482: INFO: Service reachability failing with error: error running /tmp/kubectl1810586729/kubectl --server=https://api.e2e-e05d2a908c-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5142 exec execpod-affinityttk7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 affinity-nodeport 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 19 19:48:55.251: INFO: Running '/tmp/kubectl1810586729/kubectl --server=https://api.e2e-e05d2a908c-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5142 exec execpod-affinityttk7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80'
Oct 19 19:49:01.397: INFO: rc: 1
Oct 19 19:49:01.397: INFO: Service reachability failing with error: error running /tmp/kubectl1810586729/kubectl --server=https://api.e2e-e05d2a908c-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5142 exec execpod-affinityttk7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 affinity-nodeport 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 19 19:49:02.251: INFO: Running '/tmp/kubectl1810586729/kubectl --server=https://api.e2e-e05d2a908c-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5142 exec execpod-affinityttk7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80'
Oct 19 19:49:08.440: INFO: rc: 1
Oct 19 19:49:08.440: INFO: Service reachability failing with error: error running /tmp/kubectl1810586729/kubectl --server=https://api.e2e-e05d2a908c-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5142 exec execpod-affinityttk7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 affinity-nodeport 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 19 19:49:09.251: INFO: Running '/tmp/kubectl1810586729/kubectl --server=https://api.e2e-e05d2a908c-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5142 exec execpod-affinityttk7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80'
Oct 19 19:49:15.422: INFO: rc: 1
Oct 19 19:49:15.422: INFO: Service reachability failing with error: error running /tmp/kubectl1810586729/kubectl --server=https://api.e2e-e05d2a908c-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5142 exec execpod-affinityttk7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 affinity-nodeport 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 19 19:49:16.251: INFO: Running '/tmp/kubectl1810586729/kubectl --server=https://api.e2e-e05d2a908c-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5142 exec execpod-affinityttk7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80'
Oct 19 19:49:22.435: INFO: rc: 1
Oct 19 19:49:22.435: INFO: Service reachability failing with error: error running /tmp/kubectl1810586729/kubectl --server=https://api.e2e-e05d2a908c-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5142 exec execpod-affinityttk7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80:
Command stdout:

stderr:
+ nc -v -t -w 2 affinity-nodeport 80
+ echo hostName
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 19 19:49:23.251: INFO: Running '/tmp/kubectl1810586729/kubectl --server=https://api.e2e-e05d2a908c-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5142 exec execpod-affinityttk7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80'
Oct 19 19:49:29.467: INFO: rc: 1
Oct 19 19:49:29.467: INFO: Service reachability failing with error: error running /tmp/kubectl1810586729/kubectl --server=https://api.e2e-e05d2a908c-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5142 exec execpod-affinityttk7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 affinity-nodeport 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 19 19:49:30.251: INFO: Running '/tmp/kubectl1810586729/kubectl --server=https://api.e2e-e05d2a908c-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5142 exec execpod-affinityttk7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80'
Oct 19 19:49:36.409: INFO: rc: 1
Oct 19 19:49:36.409: INFO: Service reachability failing with error: error running /tmp/kubectl1810586729/kubectl --server=https://api.e2e-e05d2a908c-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5142 exec execpod-affinityttk7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 affinity-nodeport 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 19 19:49:37.251: INFO: Running '/tmp/kubectl1810586729/kubectl --server=https://api.e2e-e05d2a908c-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5142 exec execpod-affinityttk7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80'
Oct 19 19:49:43.434: INFO: rc: 1
Oct 19 19:49:43.434: INFO: Service reachability failing with error: error running /tmp/kubectl1810586729/kubectl --server=https://api.e2e-e05d2a908c-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5142 exec execpod-affinityttk7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 affinity-nodeport 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 19 19:49:44.251: INFO: Running '/tmp/kubectl1810586729/kubectl --server=https://api.e2e-e05d2a908c-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5142 exec execpod-affinityttk7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80'
Oct 19 19:49:50.435: INFO: rc: 1
Oct 19 19:49:50.435: INFO: Service reachability failing with error: error running /tmp/kubectl1810586729/kubectl --server=https://api.e2e-e05d2a908c-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5142 exec execpod-affinityttk7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 affinity-nodeport 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 19 19:49:51.251: INFO: Running '/tmp/kubectl1810586729/kubectl --server=https://api.e2e-e05d2a908c-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5142 exec execpod-affinityttk7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80'
Oct 19 19:49:57.413: INFO: rc: 1
Oct 19 19:49:57.413: INFO: Service reachability failing with error: error running /tmp/kubectl1810586729/kubectl --server=https://api.e2e-e05d2a908c-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5142 exec execpod-affinityttk7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 affinity-nodeport 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 19 19:49:58.251: INFO: Running '/tmp/kubectl1810586729/kubectl --server=https://api.e2e-e05d2a908c-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5142 exec execpod-affinityttk7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80'
Oct 19 19:50:04.427: INFO: rc: 1
Oct 19 19:50:04.428: INFO: Service reachability failing with error: error running /tmp/kubectl1810586729/kubectl --server=https://api.e2e-e05d2a908c-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5142 exec execpod-affinityttk7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 affinity-nodeport 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 19 19:50:05.251: INFO: Running '/tmp/kubectl1810586729/kubectl --server=https://api.e2e-e05d2a908c-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5142 exec execpod-affinityttk7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80'
Oct 19 19:50:11.425: INFO: rc: 1
Oct 19 19:50:11.425: INFO: Service reachability failing with error: error running /tmp/kubectl1810586729/kubectl --server=https://api.e2e-e05d2a908c-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5142 exec execpod-affinityttk7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 affinity-nodeport 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 19 19:50:11.425: INFO: Running '/tmp/kubectl1810586729/kubectl --server=https://api.e2e-e05d2a908c-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5142 exec execpod-affinityttk7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80'
Oct 19 19:50:17.607: INFO: rc: 1
Oct 19 19:50:17.607: INFO: Service reachability failing with error: error running /tmp/kubectl1810586729/kubectl --server=https://api.e2e-e05d2a908c-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5142 exec execpod-affinityttk7c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80:
Command stdout:

stderr:
+ nc -v -t -w 2 affinity-nodeport 80
+ echo hostName
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 19 19:50:17.607: FAIL: Unexpected error:
    <*errors.errorString | 0xc004e561b0>: {
        s: "service is not reachable within 2m0s timeout on endpoint affinity-nodeport:80 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint affinity-nodeport:80 over TCP protocol
occurred

... skipping 171 lines ...
• Failure [161.970 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should have session affinity work for NodePort service [LinuxOnly] [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Oct 19 19:50:17.607: Unexpected error:
      <*errors.errorString | 0xc004e561b0>: {
          s: "service is not reachable within 2m0s timeout on endpoint affinity-nodeport:80 over TCP protocol",
      }
      service is not reachable within 2m0s timeout on endpoint affinity-nodeport:80 over TCP protocol
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2572
------------------------------
{"msg":"FAILED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":29,"skipped":266,"failed":3,"failures":["[sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PV and a pre-bound PVC: test write access","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]}
Oct 19 19:50:33.308: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-storage] PersistentVolumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 19 lines ...
Oct 19 19:45:10.257: INFO: PersistentVolumeClaim pvc-tpzdl found and phase=Bound (107.256217ms)
Oct 19 19:45:10.257: INFO: Waiting up to 3m0s for PersistentVolume nfs-dkt8l to have phase Bound
Oct 19 19:45:10.364: INFO: PersistentVolume nfs-dkt8l found and phase=Bound (107.348807ms)
STEP: Checking pod has write access to PersistentVolume
Oct 19 19:45:10.579: INFO: Creating nfs test pod
Oct 19 19:45:10.688: INFO: Pod should terminate with exitcode 0 (success)
Oct 19 19:45:10.688: INFO: Waiting up to 5m0s for pod "pvc-tester-pdtvm" in namespace "pv-7908" to be "Succeeded or Failed"
Oct 19 19:45:10.796: INFO: Pod "pvc-tester-pdtvm": Phase="Pending", Reason="", readiness=false. Elapsed: 108.003707ms
Oct 19 19:45:12.905: INFO: Pod "pvc-tester-pdtvm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.216661241s
Oct 19 19:45:15.013: INFO: Pod "pvc-tester-pdtvm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.324526471s
Oct 19 19:45:17.122: INFO: Pod "pvc-tester-pdtvm": Phase="Pending", Reason="", readiness=false. Elapsed: 6.433933692s
Oct 19 19:45:19.232: INFO: Pod "pvc-tester-pdtvm": Phase="Pending", Reason="", readiness=false. Elapsed: 8.543672845s
Oct 19 19:45:21.341: INFO: Pod "pvc-tester-pdtvm": Phase="Pending", Reason="", readiness=false. Elapsed: 10.652712116s
... skipping 133 lines ...
Oct 19 19:50:04.010: INFO: Pod "pvc-tester-pdtvm": Phase="Pending", Reason="", readiness=false. Elapsed: 4m53.3216542s
Oct 19 19:50:06.120: INFO: Pod "pvc-tester-pdtvm": Phase="Pending", Reason="", readiness=false. Elapsed: 4m55.43126229s
Oct 19 19:50:08.229: INFO: Pod "pvc-tester-pdtvm": Phase="Pending", Reason="", readiness=false. Elapsed: 4m57.540311319s
Oct 19 19:50:10.338: INFO: Pod "pvc-tester-pdtvm": Phase="Pending", Reason="", readiness=false. Elapsed: 4m59.650012768s
Oct 19 19:50:12.339: INFO: Deleting pod "pvc-tester-pdtvm" in namespace "pv-7908"
Oct 19 19:50:12.449: INFO: Wait up to 5m0s for pod "pvc-tester-pdtvm" to be fully deleted
Oct 19 19:50:20.667: FAIL: Unexpected error:
    <*errors.errorString | 0xc003850700>: {
        s: "pod \"pvc-tester-pdtvm\" did not exit with Success: pod \"pvc-tester-pdtvm\" failed to reach Success: Gave up after waiting 5m0s for pod \"pvc-tester-pdtvm\" to be \"Succeeded or Failed\"",
    }
    pod "pvc-tester-pdtvm" did not exit with Success: pod "pvc-tester-pdtvm" failed to reach Success: Gave up after waiting 5m0s for pod "pvc-tester-pdtvm" to be "Succeeded or Failed"
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/storage.completeTest(0xc002454dc0, 0x779f8f8, 0xc0020a7a20, 0xc0044eb6a9, 0x7, 0xc00306f680, 0xc001c0a1c0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:52 +0x19c
k8s.io/kubernetes/test/e2e/storage.glob..func22.2.3.2()
... skipping 22 lines ...
Oct 19 19:50:37.317: INFO: At 2021-10-19 19:45:07 +0000 UTC - event for nfs-server: {default-scheduler } Scheduled: Successfully assigned pv-7908/nfs-server to ip-172-20-52-34.eu-west-1.compute.internal
Oct 19 19:50:37.317: INFO: At 2021-10-19 19:45:07 +0000 UTC - event for nfs-server: {kubelet ip-172-20-52-34.eu-west-1.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/volume/nfs:1.2" already present on machine
Oct 19 19:50:37.317: INFO: At 2021-10-19 19:45:07 +0000 UTC - event for nfs-server: {kubelet ip-172-20-52-34.eu-west-1.compute.internal} Created: Created container nfs-server
Oct 19 19:50:37.317: INFO: At 2021-10-19 19:45:07 +0000 UTC - event for nfs-server: {kubelet ip-172-20-52-34.eu-west-1.compute.internal} Started: Started container nfs-server
Oct 19 19:50:37.317: INFO: At 2021-10-19 19:45:10 +0000 UTC - event for pvc-tester-pdtvm: {default-scheduler } Scheduled: Successfully assigned pv-7908/pvc-tester-pdtvm to ip-172-20-35-5.eu-west-1.compute.internal
Oct 19 19:50:37.317: INFO: At 2021-10-19 19:47:13 +0000 UTC - event for pvc-tester-pdtvm: {kubelet ip-172-20-35-5.eu-west-1.compute.internal} FailedMount: Unable to attach or mount volumes: unmounted volumes=[volume1], unattached volumes=[volume1 kube-api-access-k424d]: timed out waiting for the condition
Oct 19 19:50:37.317: INFO: At 2021-10-19 19:48:14 +0000 UTC - event for pvc-tester-pdtvm: {kubelet ip-172-20-35-5.eu-west-1.compute.internal} FailedMount: MountVolume.SetUp failed for volume "nfs-dkt8l" : mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t nfs 100.96.2.100:/exports /var/lib/kubelet/pods/399306b8-f601-4615-8022-dd7cb9c1c263/volumes/kubernetes.io~nfs/nfs-dkt8l
Output: mount.nfs: Connection timed out

Oct 19 19:50:37.317: INFO: At 2021-10-19 19:50:20 +0000 UTC - event for nfs-server: {kubelet ip-172-20-52-34.eu-west-1.compute.internal} Killing: Stopping container nfs-server
Oct 19 19:50:37.425: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
... skipping 124 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:122
    with Single PV - PVC pairs
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:155
      should create a non-pre-bound PV and PVC: test write access  [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:169

      Oct 19 19:50:20.667: Unexpected error:
          <*errors.errorString | 0xc003850700>: {
              s: "pod \"pvc-tester-pdtvm\" did not exit with Success: pod \"pvc-tester-pdtvm\" failed to reach Success: Gave up after waiting 5m0s for pod \"pvc-tester-pdtvm\" to be \"Succeeded or Failed\"",
          }
          pod "pvc-tester-pdtvm" did not exit with Success: pod "pvc-tester-pdtvm" failed to reach Success: Gave up after waiting 5m0s for pod "pvc-tester-pdtvm" to be "Succeeded or Failed"
      occurred

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:52
------------------------------
{"msg":"FAILED [sig-storage] PersistentVolumes NFS with Single PV - PVC pairs should create a non-pre-bound PV and PVC: test write access ","total":-1,"completed":33,"skipped":180,"failed":3,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs should create a non-pre-bound PV and PVC: test write access "]}
Oct 19 19:50:41.527: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 158 lines ...
Oct 19 19:41:06.030: INFO: Running '/tmp/kubectl1810586729/kubectl --server=https://api.e2e-e05d2a908c-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-2299 create -f -'
Oct 19 19:41:06.675: INFO: stderr: ""
Oct 19 19:41:06.675: INFO: stdout: "deployment.apps/agnhost-replica created\n"
STEP: validating guestbook app
Oct 19 19:41:06.675: INFO: Waiting for all frontend pods to be Running.
Oct 19 19:41:11.826: INFO: Waiting for frontend to serve content.
Oct 19 19:41:41.935: INFO: Failed to get response from guestbook. err: the server is currently unable to handle the request (get services frontend), response: v1.Status{Status:"Failure", Message:"error trying to reach service: dial tcp 100.96.4.213:80: i/o timeout", Reason:"ServiceUnavailable"}
Oct 19 19:42:17.043: INFO: Failed to get response from guestbook. err: the server is currently unable to handle the request (get services frontend), response: v1.Status{Status:"Failure", Message:"error trying to reach service: dial tcp 100.96.1.28:80: i/o timeout", Reason:"ServiceUnavailable"}
Oct 19 19:42:52.154: INFO: Failed to get response from guestbook. err: the server is currently unable to handle the request (get services frontend), response: v1.Status{Status:"Failure", Message:"error trying to reach service: dial tcp 100.96.2.33:80: i/o timeout", Reason:"ServiceUnavailable"}
Oct 19 19:43:27.261: INFO: Failed to get response from guestbook. err: the server is currently unable to handle the request (get services frontend), response: v1.Status{Status:"Failure", Message:"error trying to reach service: dial tcp 100.96.4.213:80: i/o timeout", Reason:"ServiceUnavailable"}
Oct 19 19:44:02.369: INFO: Failed to get response from guestbook. err: the server is currently unable to handle the request (get services frontend), response: v1.Status{Status:"Failure", Message:"error trying to reach service: dial tcp 100.96.2.33:80: i/o timeout", Reason:"ServiceUnavailable"}
Oct 19 19:44:37.478: INFO: Failed to get response from guestbook. err: the server is currently unable to handle the request (get services frontend), response: v1.Status{Status:"Failure", Message:"error trying to reach service: dial tcp 100.96.2.33:80: i/o timeout", Reason:"ServiceUnavailable"}
Oct 19 19:45:12.585: INFO: Failed to get response from guestbook. err: the server is currently unable to handle the request (get services frontend), response: v1.Status{Status:"Failure", Message:"error trying to reach service: dial tcp 100.96.4.213:80: i/o timeout", Reason:"ServiceUnavailable"}
Oct 19 19:45:47.692: INFO: Failed to get response from guestbook. err: the server is currently unable to handle the request (get services frontend), response: v1.Status{Status:"Failure", Message:"error trying to reach service: dial tcp 100.96.1.28:80: i/o timeout", Reason:"ServiceUnavailable"}
Oct 19 19:46:22.808: INFO: Failed to get response from guestbook. err: the server is currently unable to handle the request (get services frontend), response: v1.Status{Status:"Failure", Message:"error trying to reach service: dial tcp 100.96.1.28:80: i/o timeout", Reason:"ServiceUnavailable"}
Oct 19 19:46:57.918: INFO: Failed to get response from guestbook. err: the server is currently unable to handle the request (get services frontend), response: v1.Status{Status:"Failure", Message:"error trying to reach service: dial tcp 100.96.2.33:80: i/o timeout", Reason:"ServiceUnavailable"}
Oct 19 19:47:33.024: INFO: Failed to get response from guestbook. err: the server is currently unable to handle the request (get services frontend), response: v1.Status{Status:"Failure", Message:"error trying to reach service: dial tcp 100.96.2.33:80: i/o timeout", Reason:"ServiceUnavailable"}
Oct 19 19:48:08.131: INFO: Failed to get response from guestbook. err: the server is currently unable to handle the request (get services frontend), response: v1.Status{Status:"Failure", Message:"error trying to reach service: dial tcp 100.96.4.213:80: i/o timeout", Reason:"ServiceUnavailable"}
Oct 19 19:48:43.238: INFO: Failed to get response from guestbook. err: the server is currently unable to handle the request (get services frontend), response: v1.Status{Status:"Failure", Message:"error trying to reach service: dial tcp 100.96.4.213:80: i/o timeout", Reason:"ServiceUnavailable"}
Oct 19 19:49:18.346: INFO: Failed to get response from guestbook. err: the server is currently unable to handle the request (get services frontend), response: v1.Status{Status:"Failure", Message:"error trying to reach service: dial tcp 100.96.4.213:80: i/o timeout", Reason:"ServiceUnavailable"}
Oct 19 19:49:53.454: INFO: Failed to get response from guestbook. err: the server is currently unable to handle the request (get services frontend), response: v1.Status{Status:"Failure", Message:"error trying to reach service: dial tcp 100.96.2.33:80: i/o timeout", Reason:"ServiceUnavailable"}
Oct 19 19:50:28.561: INFO: Failed to get response from guestbook. err: the server is currently unable to handle the request (get services frontend), response: v1.Status{Status:"Failure", Message:"error trying to reach service: dial tcp 100.96.1.28:80: i/o timeout", Reason:"ServiceUnavailable"}
Oct 19 19:51:03.668: INFO: Failed to get response from guestbook. err: the server is currently unable to handle the request (get services frontend), response: v1.Status{Status:"Failure", Message:"error trying to reach service: dial tcp 100.96.2.33:80: i/o timeout", Reason:"ServiceUnavailable"}
Oct 19 19:51:38.777: INFO: Failed to get response from guestbook. err: the server is currently unable to handle the request (get services frontend), response: v1.Status{Status:"Failure", Message:"error trying to reach service: dial tcp 100.96.4.213:80: i/o timeout", Reason:"ServiceUnavailable"}
Oct 19 19:51:43.778: FAIL: Frontend service did not start serving content in 600 seconds.

Full Stack Trace
k8s.io/kubernetes/test/e2e/kubectl.glob..func1.7.2()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:372 +0x159
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000fb2900)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
... skipping 61 lines ...
Oct 19 19:51:47.121: INFO: At 2021-10-19 19:41:07 +0000 UTC - event for agnhost-replica-6bcf79b489-d7qqr: {kubelet ip-172-20-55-71.eu-west-1.compute.internal} Started: Started container replica
Oct 19 19:51:47.121: INFO: At 2021-10-19 19:41:07 +0000 UTC - event for agnhost-replica-6bcf79b489-d7qqr: {kubelet ip-172-20-55-71.eu-west-1.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
Oct 19 19:51:47.121: INFO: At 2021-10-19 19:41:07 +0000 UTC - event for agnhost-replica-6bcf79b489-d7qqr: {kubelet ip-172-20-55-71.eu-west-1.compute.internal} Created: Created container replica
Oct 19 19:51:47.121: INFO: At 2021-10-19 19:41:07 +0000 UTC - event for agnhost-replica-6bcf79b489-t6z94: {kubelet ip-172-20-35-5.eu-west-1.compute.internal} Created: Created container replica
Oct 19 19:51:47.121: INFO: At 2021-10-19 19:41:07 +0000 UTC - event for agnhost-replica-6bcf79b489-t6z94: {kubelet ip-172-20-35-5.eu-west-1.compute.internal} Started: Started container replica
Oct 19 19:51:47.121: INFO: At 2021-10-19 19:41:07 +0000 UTC - event for agnhost-replica-6bcf79b489-t6z94: {kubelet ip-172-20-35-5.eu-west-1.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
Oct 19 19:51:47.121: INFO: At 2021-10-19 19:41:46 +0000 UTC - event for agnhost-replica-6bcf79b489-t6z94: {kubelet ip-172-20-35-5.eu-west-1.compute.internal} BackOff: Back-off restarting failed container
Oct 19 19:51:47.121: INFO: At 2021-10-19 19:42:50 +0000 UTC - event for agnhost-replica-6bcf79b489-d7qqr: {kubelet ip-172-20-55-71.eu-west-1.compute.internal} BackOff: Back-off restarting failed container
Oct 19 19:51:47.121: INFO: At 2021-10-19 19:51:45 +0000 UTC - event for frontend-685fc574d5-pb69g: {kubelet ip-172-20-55-71.eu-west-1.compute.internal} Killing: Stopping container guestbook-frontend
Oct 19 19:51:47.121: INFO: At 2021-10-19 19:51:45 +0000 UTC - event for frontend-685fc574d5-rhdb2: {kubelet ip-172-20-35-5.eu-west-1.compute.internal} Killing: Stopping container guestbook-frontend
Oct 19 19:51:47.121: INFO: At 2021-10-19 19:51:45 +0000 UTC - event for frontend-685fc574d5-zl6hs: {kubelet ip-172-20-52-34.eu-west-1.compute.internal} Killing: Stopping container guestbook-frontend
Oct 19 19:51:47.121: INFO: At 2021-10-19 19:51:46 +0000 UTC - event for agnhost-primary-5db8ddd565-4m9fj: {kubelet ip-172-20-55-71.eu-west-1.compute.internal} Killing: Stopping container primary
Oct 19 19:51:47.227: INFO: POD                               NODE                                        PHASE    GRACE  CONDITIONS
Oct 19 19:51:47.227: INFO: agnhost-primary-5db8ddd565-4m9fj  ip-172-20-55-71.eu-west-1.compute.internal  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-19 19:41:05 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-10-19 19:41:06 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-10-19 19:41:06 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-19 19:41:05 +0000 UTC  }]
... skipping 132 lines ...
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

    Oct 19 19:51:43.778: Frontend service did not start serving content in 600 seconds.

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:372
------------------------------
{"msg":"FAILED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":-1,"completed":23,"skipped":162,"failed":4,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
Oct 19 19:51:51.372: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 4 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W1019 19:47:17.089386    5403 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Oct 19 19:52:17.301: INFO: MetricsGrabber failed to grab metrics. Skipping metrics gathering.
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 19 19:52:17.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6541" for this suite.


... skipping 97 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:74
    A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]","total":-1,"completed":4,"skipped":41,"failed":0}
Oct 19 19:53:14.819: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 15 lines ...
Oct 19 19:38:02.812: INFO: Unable to read wheezy_udp@PodARecord from pod dns-874/dns-test-80d08776-7037-4ebf-9d4f-d1ee6522de87: the server is currently unable to handle the request (get pods dns-test-80d08776-7037-4ebf-9d4f-d1ee6522de87)
Oct 19 19:38:32.918: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-874/dns-test-80d08776-7037-4ebf-9d4f-d1ee6522de87: the server is currently unable to handle the request (get pods dns-test-80d08776-7037-4ebf-9d4f-d1ee6522de87)
Oct 19 19:39:03.024: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.dns-874.svc.cluster.local from pod dns-874/dns-test-80d08776-7037-4ebf-9d4f-d1ee6522de87: the server is currently unable to handle the request (get pods dns-test-80d08776-7037-4ebf-9d4f-d1ee6522de87)
Oct 19 19:39:33.131: INFO: Unable to read jessie_hosts@dns-querier-1 from pod dns-874/dns-test-80d08776-7037-4ebf-9d4f-d1ee6522de87: the server is currently unable to handle the request (get pods dns-test-80d08776-7037-4ebf-9d4f-d1ee6522de87)
Oct 19 19:40:03.236: INFO: Unable to read jessie_udp@PodARecord from pod dns-874/dns-test-80d08776-7037-4ebf-9d4f-d1ee6522de87: the server is currently unable to handle the request (get pods dns-test-80d08776-7037-4ebf-9d4f-d1ee6522de87)
Oct 19 19:40:33.342: INFO: Unable to read jessie_tcp@PodARecord from pod dns-874/dns-test-80d08776-7037-4ebf-9d4f-d1ee6522de87: the server is currently unable to handle the request (get pods dns-test-80d08776-7037-4ebf-9d4f-d1ee6522de87)
Oct 19 19:40:33.342: INFO: Lookups using dns-874/dns-test-80d08776-7037-4ebf-9d4f-d1ee6522de87 failed for: [wheezy_hosts@dns-querier-1.dns-test-service.dns-874.svc.cluster.local wheezy_hosts@dns-querier-1 wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-1.dns-test-service.dns-874.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Oct 19 19:41:08.448: INFO: Unable to read wheezy_hosts@dns-querier-1.dns-test-service.dns-874.svc.cluster.local from pod dns-874/dns-test-80d08776-7037-4ebf-9d4f-d1ee6522de87: the server is currently unable to handle the request (get pods dns-test-80d08776-7037-4ebf-9d4f-d1ee6522de87)
Oct 19 19:41:38.556: INFO: Unable to read wheezy_hosts@dns-querier-1 from pod dns-874/dns-test-80d08776-7037-4ebf-9d4f-d1ee6522de87: the server is currently unable to handle the request (get pods dns-test-80d08776-7037-4ebf-9d4f-d1ee6522de87)
Oct 19 19:42:08.661: INFO: Unable to read wheezy_udp@PodARecord from pod dns-874/dns-test-80d08776-7037-4ebf-9d4f-d1ee6522de87: the server is currently unable to handle the request (get pods dns-test-80d08776-7037-4ebf-9d4f-d1ee6522de87)
Oct 19 19:42:38.767: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-874/dns-test-80d08776-7037-4ebf-9d4f-d1ee6522de87: the server is currently unable to handle the request (get pods dns-test-80d08776-7037-4ebf-9d4f-d1ee6522de87)
Oct 19 19:43:08.874: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.dns-874.svc.cluster.local from pod dns-874/dns-test-80d08776-7037-4ebf-9d4f-d1ee6522de87: the server is currently unable to handle the request (get pods dns-test-80d08776-7037-4ebf-9d4f-d1ee6522de87)
Oct 19 19:43:38.980: INFO: Unable to read jessie_hosts@dns-querier-1 from pod dns-874/dns-test-80d08776-7037-4ebf-9d4f-d1ee6522de87: the server is currently unable to handle the request (get pods dns-test-80d08776-7037-4ebf-9d4f-d1ee6522de87)
Oct 19 19:44:09.085: INFO: Unable to read jessie_udp@PodARecord from pod dns-874/dns-test-80d08776-7037-4ebf-9d4f-d1ee6522de87: the server is currently unable to handle the request (get pods dns-test-80d08776-7037-4ebf-9d4f-d1ee6522de87)
Oct 19 19:44:39.191: INFO: Unable to read jessie_tcp@PodARecord from pod dns-874/dns-test-80d08776-7037-4ebf-9d4f-d1ee6522de87: the server is currently unable to handle the request (get pods dns-test-80d08776-7037-4ebf-9d4f-d1ee6522de87)
Oct 19 19:44:39.191: INFO: Lookups using dns-874/dns-test-80d08776-7037-4ebf-9d4f-d1ee6522de87 failed for: [wheezy_hosts@dns-querier-1.dns-test-service.dns-874.svc.cluster.local wheezy_hosts@dns-querier-1 wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-1.dns-test-service.dns-874.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Oct 19 19:45:13.450: INFO: Unable to read wheezy_hosts@dns-querier-1.dns-test-service.dns-874.svc.cluster.local from pod dns-874/dns-test-80d08776-7037-4ebf-9d4f-d1ee6522de87: the server is currently unable to handle the request (get pods dns-test-80d08776-7037-4ebf-9d4f-d1ee6522de87)
Oct 19 19:45:43.555: INFO: Unable to read wheezy_hosts@dns-querier-1 from pod dns-874/dns-test-80d08776-7037-4ebf-9d4f-d1ee6522de87: the server is currently unable to handle the request (get pods dns-test-80d08776-7037-4ebf-9d4f-d1ee6522de87)
Oct 19 19:46:13.661: INFO: Unable to read wheezy_udp@PodARecord from pod dns-874/dns-test-80d08776-7037-4ebf-9d4f-d1ee6522de87: the server is currently unable to handle the request (get pods dns-test-80d08776-7037-4ebf-9d4f-d1ee6522de87)
Oct 19 19:46:43.766: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-874/dns-test-80d08776-7037-4ebf-9d4f-d1ee6522de87: the server is currently unable to handle the request (get pods dns-test-80d08776-7037-4ebf-9d4f-d1ee6522de87)
Oct 19 19:47:13.872: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.dns-874.svc.cluster.local from pod dns-874/dns-test-80d08776-7037-4ebf-9d4f-d1ee6522de87: the server is currently unable to handle the request (get pods dns-test-80d08776-7037-4ebf-9d4f-d1ee6522de87)
Oct 19 19:47:43.978: INFO: Unable to read jessie_hosts@dns-querier-1 from pod dns-874/dns-test-80d08776-7037-4ebf-9d4f-d1ee6522de87: the server is currently unable to handle the request (get pods dns-test-80d08776-7037-4ebf-9d4f-d1ee6522de87)
Oct 19 19:48:14.084: INFO: Unable to read jessie_udp@PodARecord from pod dns-874/dns-test-80d08776-7037-4ebf-9d4f-d1ee6522de87: the server is currently unable to handle the request (get pods dns-test-80d08776-7037-4ebf-9d4f-d1ee6522de87)
Oct 19 19:48:44.190: INFO: Unable to read jessie_tcp@PodARecord from pod dns-874/dns-test-80d08776-7037-4ebf-9d4f-d1ee6522de87: the server is currently unable to handle the request (get pods dns-test-80d08776-7037-4ebf-9d4f-d1ee6522de87)
Oct 19 19:48:44.190: INFO: Lookups using dns-874/dns-test-80d08776-7037-4ebf-9d4f-d1ee6522de87 failed for: [wheezy_hosts@dns-querier-1.dns-test-service.dns-874.svc.cluster.local wheezy_hosts@dns-querier-1 wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-1.dns-test-service.dns-874.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Oct 19 19:49:18.449: INFO: Unable to read wheezy_hosts@dns-querier-1.dns-test-service.dns-874.svc.cluster.local from pod dns-874/dns-test-80d08776-7037-4ebf-9d4f-d1ee6522de87: the server is currently unable to handle the request (get pods dns-test-80d08776-7037-4ebf-9d4f-d1ee6522de87)
Oct 19 19:49:48.555: INFO: Unable to read wheezy_hosts@dns-querier-1 from pod dns-874/dns-test-80d08776-7037-4ebf-9d4f-d1ee6522de87: the server is currently unable to handle the request (get pods dns-test-80d08776-7037-4ebf-9d4f-d1ee6522de87)
Oct 19 19:50:18.662: INFO: Unable to read wheezy_udp@PodARecord from pod dns-874/dns-test-80d08776-7037-4ebf-9d4f-d1ee6522de87: the server is currently unable to handle the request (get pods dns-test-80d08776-7037-4ebf-9d4f-d1ee6522de87)
Oct 19 19:50:48.768: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-874/dns-test-80d08776-7037-4ebf-9d4f-d1ee6522de87: the server is currently unable to handle the request (get pods dns-test-80d08776-7037-4ebf-9d4f-d1ee6522de87)
Oct 19 19:51:18.873: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.dns-874.svc.cluster.local from pod dns-874/dns-test-80d08776-7037-4ebf-9d4f-d1ee6522de87: the server is currently unable to handle the request (get pods dns-test-80d08776-7037-4ebf-9d4f-d1ee6522de87)
Oct 19 19:51:48.979: INFO: Unable to read jessie_hosts@dns-querier-1 from pod dns-874/dns-test-80d08776-7037-4ebf-9d4f-d1ee6522de87: the server is currently unable to handle the request (get pods dns-test-80d08776-7037-4ebf-9d4f-d1ee6522de87)
Oct 19 19:52:19.086: INFO: Unable to read jessie_udp@PodARecord from pod dns-874/dns-test-80d08776-7037-4ebf-9d4f-d1ee6522de87: the server is currently unable to handle the request (get pods dns-test-80d08776-7037-4ebf-9d4f-d1ee6522de87)
Oct 19 19:52:49.191: INFO: Unable to read jessie_tcp@PodARecord from pod dns-874/dns-test-80d08776-7037-4ebf-9d4f-d1ee6522de87: the server is currently unable to handle the request (get pods dns-test-80d08776-7037-4ebf-9d4f-d1ee6522de87)
Oct 19 19:52:49.191: INFO: Lookups using dns-874/dns-test-80d08776-7037-4ebf-9d4f-d1ee6522de87 failed for: [wheezy_hosts@dns-querier-1.dns-test-service.dns-874.svc.cluster.local wheezy_hosts@dns-querier-1 wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-1.dns-test-service.dns-874.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Oct 19 19:53:19.297: INFO: Unable to read wheezy_hosts@dns-querier-1.dns-test-service.dns-874.svc.cluster.local from pod dns-874/dns-test-80d08776-7037-4ebf-9d4f-d1ee6522de87: the server is currently unable to handle the request (get pods dns-test-80d08776-7037-4ebf-9d4f-d1ee6522de87)
Oct 19 19:53:49.403: INFO: Unable to read wheezy_hosts@dns-querier-1 from pod dns-874/dns-test-80d08776-7037-4ebf-9d4f-d1ee6522de87: the server is currently unable to handle the request (get pods dns-test-80d08776-7037-4ebf-9d4f-d1ee6522de87)
Oct 19 19:54:19.509: INFO: Unable to read wheezy_udp@PodARecord from pod dns-874/dns-test-80d08776-7037-4ebf-9d4f-d1ee6522de87: the server is currently unable to handle the request (get pods dns-test-80d08776-7037-4ebf-9d4f-d1ee6522de87)
Oct 19 19:54:49.616: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-874/dns-test-80d08776-7037-4ebf-9d4f-d1ee6522de87: the server is currently unable to handle the request (get pods dns-test-80d08776-7037-4ebf-9d4f-d1ee6522de87)
Oct 19 19:55:19.722: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.dns-874.svc.cluster.local from pod dns-874/dns-test-80d08776-7037-4ebf-9d4f-d1ee6522de87: the server is currently unable to handle the request (get pods dns-test-80d08776-7037-4ebf-9d4f-d1ee6522de87)
Oct 19 19:55:49.828: INFO: Unable to read jessie_hosts@dns-querier-1 from pod dns-874/dns-test-80d08776-7037-4ebf-9d4f-d1ee6522de87: the server is currently unable to handle the request (get pods dns-test-80d08776-7037-4ebf-9d4f-d1ee6522de87)
Oct 19 19:56:19.934: INFO: Unable to read jessie_udp@PodARecord from pod dns-874/dns-test-80d08776-7037-4ebf-9d4f-d1ee6522de87: the server is currently unable to handle the request (get pods dns-test-80d08776-7037-4ebf-9d4f-d1ee6522de87)
Oct 19 19:56:50.041: INFO: Unable to read jessie_tcp@PodARecord from pod dns-874/dns-test-80d08776-7037-4ebf-9d4f-d1ee6522de87: the server is currently unable to handle the request (get pods dns-test-80d08776-7037-4ebf-9d4f-d1ee6522de87)
Oct 19 19:56:50.041: INFO: Lookups using dns-874/dns-test-80d08776-7037-4ebf-9d4f-d1ee6522de87 failed for: [wheezy_hosts@dns-querier-1.dns-test-service.dns-874.svc.cluster.local wheezy_hosts@dns-querier-1 wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-1.dns-test-service.dns-874.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Oct 19 19:56:50.041: FAIL: Unexpected error:
    <*errors.errorString | 0xc00023e240>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 140 lines ...
• Failure [1235.159 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Oct 19 19:56:50.041: Unexpected error:
      <*errors.errorString | 0xc00023e240>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:463
------------------------------
{"msg":"FAILED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":-1,"completed":32,"skipped":202,"failed":2,"failures":["[sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols","[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]"]}
Oct 19 19:56:54.715: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 7 lines ...
STEP: creating RC slow-terminating-unready-pod with selectors map[name:slow-terminating-unready-pod]
STEP: creating Service tolerate-unready with selectors map[name:slow-terminating-unready-pod testid:tolerate-unready-803a815f-b66a-4e91-a5b8-cfba6fa36646]
STEP: Verifying pods for RC slow-terminating-unready-pod
Oct 19 19:44:04.779: INFO: Pod name slow-terminating-unready-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
STEP: trying to dial each unique pod
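
For context on the setup above: a Service that must keep routing to a slow-terminating, never-ready pod has to publish not-ready endpoints. A rough client-go sketch of such a Service object is below; the port numbers are illustrative, and the exact mechanism is an assumption (the e2e test's own manifest may use the legacy tolerate-unready annotation rather than the `publishNotReadyAddresses` field):

```go
// Sketch only: roughly the kind of Service the "tolerate-unready" test creates.
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	svc := &v1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "tolerate-unready"},
		Spec: v1.ServiceSpec{
			Selector: map[string]string{"name": "slow-terminating-unready-pod"},
			// Keep endpoints for pods even while they are not Ready.
			PublishNotReadyAddresses: true,
			Ports: []v1.ServicePort{{
				Port:       80,                // illustrative
				TargetPort: intstr.FromInt(80), // illustrative
			}},
		},
	}
	fmt.Printf("%+v\n", svc.Spec)
}
```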
Oct 19 19:44:39.312: INFO: Controller slow-terminating-unready-pod: Failed to GET from replica 1 [slow-terminating-unready-pod-qgfr4]: the server is currently unable to handle the request (get pods slow-terminating-unready-pod-qgfr4)
pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770269444, loc:(*time.Location)(0x9e12f00)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770269444, loc:(*time.Location)(0x9e12f00)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [slow-terminating-unready-pod]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770269444, loc:(*time.Location)(0x9e12f00)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [slow-terminating-unready-pod]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770269444, loc:(*time.Location)(0x9e12f00)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.20.35.5", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(0xc00211cd98), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"slow-terminating-unready-pod", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002bcd860), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/agnhost:2.32", ImageID:"", ContainerID:"", Started:(*bool)(0xc004c078fb)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
... skipping 50 lines ...
Oct 19 19:58:31.633: INFO: Controller slow-terminating-unready-pod: Failed to GET from replica 1 [slow-terminating-unready-pod-qgfr4]: the server is currently unable to handle the request (get pods slow-terminating-unready-pod-qgfr4)
pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770269444, loc:(*time.Location)(0x9e12f00)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770269444, loc:(*time.Location)(0x9e12f00)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [slow-terminating-unready-pod]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770269444, loc:(*time.Location)(0x9e12f00)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [slow-terminating-unready-pod]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770269444, loc:(*time.Location)(0x9e12f00)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.20.35.5", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(0xc00211cd98), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"slow-terminating-unready-pod", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002bcd860), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/agnhost:2.32", ImageID:"", ContainerID:"", Started:(*bool)(0xc004c078fb)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
Oct 19 19:59:03.633: INFO: Controller slow-terminating-unready-pod: Failed to GET from replica 1 [slow-terminating-unready-pod-qgfr4]: the server is currently unable to handle the request (get pods slow-terminating-unready-pod-qgfr4)
pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770269444, loc:(*time.Location)(0x9e12f00)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770269444, loc:(*time.Location)(0x9e12f00)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [slow-terminating-unready-pod]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770269444, loc:(*time.Location)(0x9e12f00)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [slow-terminating-unready-pod]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770269444, loc:(*time.Location)(0x9e12f00)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.20.35.5", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(0xc00211cd98), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"slow-terminating-unready-pod", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002bcd860), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/agnhost:2.32", ImageID:"", ContainerID:"", Started:(*bool)(0xc004c078fb)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
Oct 19 19:59:35.634: INFO: Controller slow-terminating-unready-pod: Failed to GET from replica 1 [slow-terminating-unready-pod-qgfr4]: the server is currently unable to handle the request (get pods slow-terminating-unready-pod-qgfr4)
pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770269444, loc:(*time.Location)(0x9e12f00)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770269444, loc:(*time.Location)(0x9e12f00)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [slow-terminating-unready-pod]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770269444, loc:(*time.Location)(0x9e12f00)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [slow-terminating-unready-pod]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770269444, loc:(*time.Location)(0x9e12f00)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.20.35.5", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(0xc00211cd98), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"slow-terminating-unready-pod", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002bcd860), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/agnhost:2.32", ImageID:"", ContainerID:"", Started:(*bool)(0xc004c078fb)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
Oct 19 20:00:07.633: INFO: Controller slow-terminating-unready-pod: Failed to GET from replica 1 [slow-terminating-unready-pod-qgfr4]: the server is currently unable to handle the request (get pods slow-terminating-unready-pod-qgfr4)
pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770269444, loc:(*time.Location)(0x9e12f00)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770269444, loc:(*time.Location)(0x9e12f00)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [slow-terminating-unready-pod]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770269444, loc:(*time.Location)(0x9e12f00)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [slow-terminating-unready-pod]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770269444, loc:(*time.Location)(0x9e12f00)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.20.35.5", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(0xc00211cd98), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"slow-terminating-unready-pod", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002bcd860), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/agnhost:2.32", ImageID:"", ContainerID:"", Started:(*bool)(0xc004c078fb)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
Oct 19 20:00:37.952: INFO: Controller slow-terminating-unready-pod: Failed to GET from replica 1 [slow-terminating-unready-pod-qgfr4]: the server is currently unable to handle the request (get pods slow-terminating-unready-pod-qgfr4)
pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770269444, loc:(*time.Location)(0x9e12f00)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770269444, loc:(*time.Location)(0x9e12f00)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [slow-terminating-unready-pod]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770269444, loc:(*time.Location)(0x9e12f00)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [slow-terminating-unready-pod]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770269444, loc:(*time.Location)(0x9e12f00)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.20.35.5", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(0xc00211cd98), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"slow-terminating-unready-pod", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002bcd860), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/agnhost:2.32", ImageID:"", ContainerID:"", Started:(*bool)(0xc004c078fb)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
Oct 19 20:00:37.953: FAIL: Unexpected error:
    <*errors.errorString | 0xc0041f4810>: {
        s: "failed to wait for pods responding: timed out waiting for the condition",
    }
    failed to wait for pods responding: timed out waiting for the condition
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func24.21()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1688 +0xb99
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc003427500)
... skipping 12 lines ...
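The repeated "Failed to GET from replica" lines followed by "timed out waiting for the condition" are the signature of an apimachinery wait loop: wait.PollImmediate retries a condition func and returns wait.ErrWaitTimeout (whose message is exactly that string) once the deadline passes, which the test then wraps as "failed to wait for pods responding". The Go sketch below is a minimal reconstruction, not the e2e framework's actual helper; the function name, intervals, and the apiserver proxy-GET check are assumptions.

package main

import (
	"context"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodResponding is a hypothetical stand-in for the e2e helper that
// produced the log lines above: it GETs the pod through the apiserver proxy
// until it answers or the timeout elapses.
func waitForPodResponding(c kubernetes.Interface, ns, podName string) error {
	err := wait.PollImmediate(30*time.Second, 15*time.Minute, func() (bool, error) {
		// While the pod's readiness probe fails, the apiserver proxy answers
		// "the server is currently unable to handle the request", so keep polling.
		_, err := c.CoreV1().Pods(ns).ProxyGet("", podName, "", "/", nil).DoRaw(context.TODO())
		if err != nil {
			fmt.Printf("Failed to GET from replica [%s]: %v\n", podName, err)
			return false, nil // retry; returning a non-nil error would abort
		}
		return true, nil
	})
	if err != nil {
		// wait.ErrWaitTimeout stringifies to "timed out waiting for the condition".
		return fmt.Errorf("failed to wait for pods responding: %v", err)
	}
	return nil
}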
STEP: Found 7 events.
Oct 19 20:00:38.505: INFO: At 2021-10-19 19:44:04 +0000 UTC - event for slow-terminating-unready-pod: {replication-controller } SuccessfulCreate: Created pod: slow-terminating-unready-pod-qgfr4
Oct 19 20:00:38.505: INFO: At 2021-10-19 19:44:04 +0000 UTC - event for slow-terminating-unready-pod-qgfr4: {default-scheduler } Scheduled: Successfully assigned services-4181/slow-terminating-unready-pod-qgfr4 to ip-172-20-35-5.eu-west-1.compute.internal
Oct 19 20:00:38.505: INFO: At 2021-10-19 19:44:05 +0000 UTC - event for slow-terminating-unready-pod-qgfr4: {kubelet ip-172-20-35-5.eu-west-1.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
Oct 19 20:00:38.505: INFO: At 2021-10-19 19:44:05 +0000 UTC - event for slow-terminating-unready-pod-qgfr4: {kubelet ip-172-20-35-5.eu-west-1.compute.internal} Created: Created container slow-terminating-unready-pod
Oct 19 20:00:38.505: INFO: At 2021-10-19 19:44:05 +0000 UTC - event for slow-terminating-unready-pod-qgfr4: {kubelet ip-172-20-35-5.eu-west-1.compute.internal} Started: Started container slow-terminating-unready-pod
Oct 19 20:00:38.505: INFO: At 2021-10-19 19:44:05 +0000 UTC - event for slow-terminating-unready-pod-qgfr4: {kubelet ip-172-20-35-5.eu-west-1.compute.internal} Unhealthy: Readiness probe failed: 
Oct 19 20:00:38.505: INFO: At 2021-10-19 20:00:38 +0000 UTC - event for slow-terminating-unready-pod: {replication-controller } SuccessfulDelete: Deleted pod: slow-terminating-unready-pod-qgfr4
Oct 19 20:00:38.611: INFO: POD                                 NODE                                       PHASE    GRACE  CONDITIONS
Oct 19 20:00:38.611: INFO: slow-terminating-unready-pod-qgfr4  ip-172-20-35-5.eu-west-1.compute.internal  Running  600s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-19 19:44:04 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-19 19:44:04 +0000 UTC ContainersNotReady containers with unready status: [slow-terminating-unready-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-19 19:44:04 +0000 UTC ContainersNotReady containers with unready status: [slow-terminating-unready-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-19 19:44:04 +0000 UTC  }]
Oct 19 20:00:38.611: INFO: 
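For context, the events and the 600s GRACE column above describe a pod whose container starts normally but whose readiness probe never succeeds, and which has a long termination grace period. The sketch below is a pod spec consistent with that, using the v1.21-era API (where Probe embeds Handler); the probe choice and port are assumptions, not the test's literal manifest.

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// slowTerminatingUnreadyPod builds a pod matching the logged behavior:
// started but never Ready, and slow to delete.
func slowTerminatingUnreadyPod() *corev1.Pod {
	grace := int64(600) // matches the 600s GRACE column in the pod listing
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "slow-terminating-unready-pod"},
		Spec: corev1.PodSpec{
			TerminationGracePeriodSeconds: &grace,
			Containers: []corev1.Container{{
				Name:  "slow-terminating-unready-pod",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32",
				// Assumed probe: targets a port agnhost does not serve, so the
				// kubelet keeps emitting "Readiness probe failed" events.
				ReadinessProbe: &corev1.Probe{
					Handler: corev1.Handler{
						TCPSocket: &corev1.TCPSocketAction{
							Port: intstr.FromInt(8888),
						},
					},
				},
			}},
		},
	}
}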
Oct 19 20:00:38.718: INFO: 
Logging node info for node ip-172-20-35-5.eu-west-1.compute.internal
... skipping 101 lines ...
• Failure [998.793 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should create endpoints for unready pods [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1624

  Oct 19 20:00:37.953: Unexpected error:
      <*errors.errorString | 0xc0041f4810>: {
          s: "failed to wait for pods responding: timed out waiting for the condition",
      }
      failed to wait for pods responding: timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1688
------------------------------
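The failing case, "should create endpoints for unready pods", exercises a Service that publishes endpoints even for pods that are not Ready. Below is a minimal client-go sketch of such a Service; the name, selector, and port are assumptions, but ServiceSpec.PublishNotReadyAddresses is the real field that enables this behavior.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createTolerateUnreadyService creates a headless Service whose Endpoints
// include addresses of pods that have not passed their readiness probe.
func createTolerateUnreadyService(c kubernetes.Interface, ns string) error {
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "tolerate-unready"},
		Spec: corev1.ServiceSpec{
			Selector:                 map[string]string{"name": "slow-terminating-unready-pod"},
			ClusterIP:                corev1.ClusterIPNone, // headless
			PublishNotReadyAddresses: true,                 // endpoints include unready pods
			Ports:                    []corev1.ServicePort{{Port: 80}},
		},
	}
	_, err := c.CoreV1().Services(ns).Create(context.TODO(), svc, metav1.CreateOptions{})
	return err
}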
{"msg":"FAILED [sig-network] Services should create endpoints for unready pods","total":-1,"completed":30,"skipped":212,"failed":3,"failures":["[sig-storage] PersistentVolumes NFS when invoking the Recycle reclaim policy should test that a PV becomes Available and is clean after the PVC is deleted.","[sig-api-machinery] AdmissionWebhook [Priv