Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2021-10-07 16:27
Elapsed: 36m32s
Revision: master

No Test Failures!


Error lines from build-log.txt

... skipping 134 lines ...
I1007 16:27:41.811690    4694 up.go:43] Cleaning up any leaked resources from previous cluster
I1007 16:27:41.811727    4694 dumplogs.go:40] /logs/artifacts/553b6bac-278b-11ec-8be6-fe96b6157dda/kops toolbox dump --name e2e-f7af145b3f-58f2d.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /etc/aws-ssh/aws-ssh-private --ssh-user ec2-user
I1007 16:27:41.826111    4712 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I1007 16:27:41.826209    4712 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true

Cluster.kops.k8s.io "e2e-f7af145b3f-58f2d.test-cncf-aws.k8s.io" not found
W1007 16:27:42.301573    4694 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I1007 16:27:42.301629    4694 down.go:48] /logs/artifacts/553b6bac-278b-11ec-8be6-fe96b6157dda/kops delete cluster --name e2e-f7af145b3f-58f2d.test-cncf-aws.k8s.io --yes
I1007 16:27:42.315581    4721 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I1007 16:27:42.315802    4721 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true

error reading cluster configuration: Cluster.kops.k8s.io "e2e-f7af145b3f-58f2d.test-cncf-aws.k8s.io" not found
I1007 16:27:42.816088    4694 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2021/10/07 16:27:42 failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
I1007 16:27:42.825885    4694 http.go:37] curl https://ip.jsb.workers.dev
I1007 16:27:42.913028    4694 up.go:144] /logs/artifacts/553b6bac-278b-11ec-8be6-fe96b6157dda/kops create cluster --name e2e-f7af145b3f-58f2d.test-cncf-aws.k8s.io --cloud aws --kubernetes-version https://storage.googleapis.com/kubernetes-release/release/v1.21.5 --ssh-public-key /etc/aws-ssh/aws-ssh-public --override cluster.spec.nodePortAccess=0.0.0.0/0 --yes --image=amazon/amzn2-ami-kernel-5.10-hvm-2.0.20211001.1-x86_64-gp2 --channel=alpha --networking=flannel --container-runtime=containerd --admin-access 34.135.157.253/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones sa-east-1a --master-size c5.large
I1007 16:27:42.927171    4732 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I1007 16:27:42.927252    4732 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true
I1007 16:27:42.970058    4732 create_cluster.go:728] Using SSH public key: /etc/aws-ssh/aws-ssh-public
I1007 16:27:43.465972    4732 new_cluster.go:1011]  Cloud Provider ID = aws
... skipping 42 lines ...

I1007 16:28:11.924211    4694 up.go:181] /logs/artifacts/553b6bac-278b-11ec-8be6-fe96b6157dda/kops validate cluster --name e2e-f7af145b3f-58f2d.test-cncf-aws.k8s.io --count 10 --wait 15m0s
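The `kops validate cluster --count 10 --wait 15m0s` invocation above polls the cluster until it reports healthy or the wait elapses, which is why the warnings below repeat every ~10 seconds. A minimal sketch of that retry loop (an illustrative helper, not kops source; names and parameters are assumptions):

```shell
# Illustrative retry loop: re-run a validation command until it succeeds
# or a deadline passes, sleeping a fixed interval between attempts.
validate_until_healthy() {
  cmd=$1; wait_secs=$2; interval=$3
  deadline=$(( $(date +%s) + wait_secs ))
  until $cmd; do
    # Give up once the overall wait budget is spent.
    [ "$(date +%s)" -ge "$deadline" ] && return 1
    sleep "$interval"
  done
}
```

In this job the command being retried is the `kops validate cluster` call itself, with a 15-minute budget.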
I1007 16:28:11.935776    4750 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I1007 16:28:11.935851    4750 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true
Validating cluster e2e-f7af145b3f-58f2d.test-cncf-aws.k8s.io

W1007 16:28:13.512450    4750 validate_cluster.go:173] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-f7af145b3f-58f2d.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-sa-east-1a	Master	c5.large	1	1	sa-east-1a
nodes-sa-east-1a	Node	t3.medium	4	4	sa-east-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
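The placeholder address named in the message above (203.0.113.123 sits in the TEST-NET-3 documentation range) makes this failure mode easy to detect directly. A hypothetical helper, not part of kops, sketching that check:

```shell
# Illustrative check: true while a resolved address is still the
# placeholder kops writes before dns-controller updates the record.
still_placeholder() {
  [ "$1" = "203.0.113.123" ]
}
# Real usage would feed it the resolved API record, e.g.:
#   still_placeholder "$(dig +short "api.${CLUSTER_NAME}")"
```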
W1007 16:28:23.548851    4750 validate_cluster.go:221] (will retry): cluster not yet healthy
W1007 16:28:33.586277    4750 validate_cluster.go:221] (will retry): cluster not yet healthy
W1007 16:28:43.616822    4750 validate_cluster.go:221] (will retry): cluster not yet healthy
W1007 16:28:53.664519    4750 validate_cluster.go:221] (will retry): cluster not yet healthy
W1007 16:29:03.903734    4750 validate_cluster.go:221] (will retry): cluster not yet healthy
W1007 16:29:13.933601    4750 validate_cluster.go:221] (will retry): cluster not yet healthy
W1007 16:29:23.964160    4750 validate_cluster.go:221] (will retry): cluster not yet healthy
W1007 16:29:33.983017    4750 validate_cluster.go:173] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-f7af145b3f-58f2d.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
W1007 16:29:44.031164    4750 validate_cluster.go:221] (will retry): cluster not yet healthy
W1007 16:29:54.067038    4750 validate_cluster.go:221] (will retry): cluster not yet healthy
W1007 16:30:04.099008    4750 validate_cluster.go:221] (will retry): cluster not yet healthy
W1007 16:30:14.144907    4750 validate_cluster.go:173] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-f7af145b3f-58f2d.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
W1007 16:30:24.191925    4750 validate_cluster.go:221] (will retry): cluster not yet healthy
W1007 16:30:34.243376    4750 validate_cluster.go:173] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-f7af145b3f-58f2d.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
W1007 16:30:44.279571    4750 validate_cluster.go:221] (will retry): cluster not yet healthy
W1007 16:30:54.311422    4750 validate_cluster.go:221] (will retry): cluster not yet healthy
W1007 16:31:04.332994    4750 validate_cluster.go:173] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-f7af145b3f-58f2d.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
W1007 16:31:14.367880    4750 validate_cluster.go:173] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-f7af145b3f-58f2d.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
W1007 16:31:24.398497    4750 validate_cluster.go:221] (will retry): cluster not yet healthy
W1007 16:31:34.436863    4750 validate_cluster.go:221] (will retry): cluster not yet healthy
W1007 16:31:44.474061    4750 validate_cluster.go:221] (will retry): cluster not yet healthy
W1007 16:31:54.521280    4750 validate_cluster.go:221] (will retry): cluster not yet healthy
W1007 16:32:04.565851    4750 validate_cluster.go:221] (will retry): cluster not yet healthy
W1007 16:32:14.597238    4750 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-sa-east-1a	Master	c5.large	1	1	sa-east-1a
nodes-sa-east-1a	Node	t3.medium	4	4	sa-east-1a

... skipping 13 lines ...
Pod	kube-system/coredns-5dc785954d-mw8xz					system-cluster-critical pod "coredns-5dc785954d-mw8xz" is pending
Pod	kube-system/coredns-autoscaler-84d4cfd89c-x4zb4				system-cluster-critical pod "coredns-autoscaler-84d4cfd89c-x4zb4" is pending
Pod	kube-system/kube-flannel-ds-gcqsk					system-node-critical pod "kube-flannel-ds-gcqsk" is pending
Pod	kube-system/kube-flannel-ds-ksddw					system-node-critical pod "kube-flannel-ds-ksddw" is pending
Pod	kube-system/kube-proxy-ip-172-20-42-249.sa-east-1.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-42-249.sa-east-1.compute.internal" is pending

Validation Failed
W1007 16:32:28.338160    4750 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-sa-east-1a	Master	c5.large	1	1	sa-east-1a
nodes-sa-east-1a	Node	t3.medium	4	4	sa-east-1a

... skipping 13 lines ...
Pod	kube-system/coredns-5dc785954d-mw8xz						system-cluster-critical pod "coredns-5dc785954d-mw8xz" is pending
Pod	kube-system/coredns-autoscaler-84d4cfd89c-x4zb4					system-cluster-critical pod "coredns-autoscaler-84d4cfd89c-x4zb4" is pending
Pod	kube-system/kube-controller-manager-ip-172-20-59-195.sa-east-1.compute.internal	system-cluster-critical pod "kube-controller-manager-ip-172-20-59-195.sa-east-1.compute.internal" is pending
Pod	kube-system/kube-flannel-ds-8cvxf						system-node-critical pod "kube-flannel-ds-8cvxf" is pending
Pod	kube-system/kube-flannel-ds-sp4dp						system-node-critical pod "kube-flannel-ds-sp4dp" is pending

Validation Failed
W1007 16:32:40.717837    4750 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-sa-east-1a	Master	c5.large	1	1	sa-east-1a
nodes-sa-east-1a	Node	t3.medium	4	4	sa-east-1a

... skipping 6 lines ...
ip-172-20-59-195.sa-east-1.compute.internal	master	True

VALIDATION ERRORS
KIND	NAME						MESSAGE
Node	ip-172-20-56-61.sa-east-1.compute.internal	node "ip-172-20-56-61.sa-east-1.compute.internal" of role "node" is not ready

Validation Failed
W1007 16:32:53.382658    4750 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-sa-east-1a	Master	c5.large	1	1	sa-east-1a
nodes-sa-east-1a	Node	t3.medium	4	4	sa-east-1a

... skipping 641 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: vsphere]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [vsphere] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1437
------------------------------
... skipping 806 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  7 16:35:24.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-2048" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":-1,"completed":1,"skipped":3,"failed":0}
[BeforeEach] [sig-windows] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/windows/framework.go:28
Oct  7 16:35:24.831: INFO: Only supported for node OS distro [windows] (not debian)
[AfterEach] [sig-windows] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 195 lines ...
Oct  7 16:35:23.850: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-map-28ac07a2-113d-4678-9c9a-7badf02c0fb4
STEP: Creating a pod to test consume configMaps
Oct  7 16:35:24.453: INFO: Waiting up to 5m0s for pod "pod-configmaps-a0615e90-a3ae-4df6-9e95-8d63a05974c5" in namespace "configmap-242" to be "Succeeded or Failed"
Oct  7 16:35:24.597: INFO: Pod "pod-configmaps-a0615e90-a3ae-4df6-9e95-8d63a05974c5": Phase="Pending", Reason="", readiness=false. Elapsed: 144.092168ms
Oct  7 16:35:26.741: INFO: Pod "pod-configmaps-a0615e90-a3ae-4df6-9e95-8d63a05974c5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.287730756s
Oct  7 16:35:28.886: INFO: Pod "pod-configmaps-a0615e90-a3ae-4df6-9e95-8d63a05974c5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.433172175s
Oct  7 16:35:31.031: INFO: Pod "pod-configmaps-a0615e90-a3ae-4df6-9e95-8d63a05974c5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.578212955s
STEP: Saw pod success
Oct  7 16:35:31.031: INFO: Pod "pod-configmaps-a0615e90-a3ae-4df6-9e95-8d63a05974c5" satisfied condition "Succeeded or Failed"
Oct  7 16:35:31.175: INFO: Trying to get logs from node ip-172-20-42-249.sa-east-1.compute.internal pod pod-configmaps-a0615e90-a3ae-4df6-9e95-8d63a05974c5 container agnhost-container: <nil>
STEP: delete the pod
Oct  7 16:35:31.487: INFO: Waiting for pod pod-configmaps-a0615e90-a3ae-4df6-9e95-8d63a05974c5 to disappear
Oct  7 16:35:31.631: INFO: Pod pod-configmaps-a0615e90-a3ae-4df6-9e95-8d63a05974c5 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:9.209 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":9,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:35:32.081: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 51 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:106
STEP: Creating a pod to test downward API volume plugin
Oct  7 16:35:24.013: INFO: Waiting up to 5m0s for pod "metadata-volume-a6f1029c-cadd-4c22-b76b-8a8eea869d5e" in namespace "downward-api-609" to be "Succeeded or Failed"
Oct  7 16:35:24.172: INFO: Pod "metadata-volume-a6f1029c-cadd-4c22-b76b-8a8eea869d5e": Phase="Pending", Reason="", readiness=false. Elapsed: 158.781518ms
Oct  7 16:35:26.315: INFO: Pod "metadata-volume-a6f1029c-cadd-4c22-b76b-8a8eea869d5e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.302084486s
Oct  7 16:35:28.459: INFO: Pod "metadata-volume-a6f1029c-cadd-4c22-b76b-8a8eea869d5e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.446155446s
Oct  7 16:35:30.603: INFO: Pod "metadata-volume-a6f1029c-cadd-4c22-b76b-8a8eea869d5e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.589736337s
Oct  7 16:35:32.747: INFO: Pod "metadata-volume-a6f1029c-cadd-4c22-b76b-8a8eea869d5e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.733588523s
STEP: Saw pod success
Oct  7 16:35:32.747: INFO: Pod "metadata-volume-a6f1029c-cadd-4c22-b76b-8a8eea869d5e" satisfied condition "Succeeded or Failed"
Oct  7 16:35:32.890: INFO: Trying to get logs from node ip-172-20-43-90.sa-east-1.compute.internal pod metadata-volume-a6f1029c-cadd-4c22-b76b-8a8eea869d5e container client-container: <nil>
STEP: delete the pod
Oct  7 16:35:33.200: INFO: Waiting for pod metadata-volume-a6f1029c-cadd-4c22-b76b-8a8eea869d5e to disappear
Oct  7 16:35:33.343: INFO: Pod metadata-volume-a6f1029c-cadd-4c22-b76b-8a8eea869d5e no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:10.930 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:106
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":1,"skipped":4,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:35:33.788: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
... skipping 28 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50
[It] files with FSGroup ownership should support (root,0644,tmpfs)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:67
STEP: Creating a pod to test emptydir 0644 on tmpfs
Oct  7 16:35:25.016: INFO: Waiting up to 5m0s for pod "pod-718dabda-5a9c-471e-ab37-525ceb135757" in namespace "emptydir-9679" to be "Succeeded or Failed"
Oct  7 16:35:25.160: INFO: Pod "pod-718dabda-5a9c-471e-ab37-525ceb135757": Phase="Pending", Reason="", readiness=false. Elapsed: 143.61826ms
Oct  7 16:35:27.305: INFO: Pod "pod-718dabda-5a9c-471e-ab37-525ceb135757": Phase="Pending", Reason="", readiness=false. Elapsed: 2.288607167s
Oct  7 16:35:29.450: INFO: Pod "pod-718dabda-5a9c-471e-ab37-525ceb135757": Phase="Pending", Reason="", readiness=false. Elapsed: 4.433441648s
Oct  7 16:35:31.594: INFO: Pod "pod-718dabda-5a9c-471e-ab37-525ceb135757": Phase="Pending", Reason="", readiness=false. Elapsed: 6.578077517s
Oct  7 16:35:33.739: INFO: Pod "pod-718dabda-5a9c-471e-ab37-525ceb135757": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.722550994s
STEP: Saw pod success
Oct  7 16:35:33.739: INFO: Pod "pod-718dabda-5a9c-471e-ab37-525ceb135757" satisfied condition "Succeeded or Failed"
Oct  7 16:35:33.882: INFO: Trying to get logs from node ip-172-20-56-61.sa-east-1.compute.internal pod pod-718dabda-5a9c-471e-ab37-525ceb135757 container test-container: <nil>
STEP: delete the pod
Oct  7 16:35:34.191: INFO: Waiting for pod pod-718dabda-5a9c-471e-ab37-525ceb135757 to disappear
Oct  7 16:35:34.335: INFO: Pod pod-718dabda-5a9c-471e-ab37-525ceb135757 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 6 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:48
    files with FSGroup ownership should support (root,0644,tmpfs)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:67
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] files with FSGroup ownership should support (root,0644,tmpfs)","total":-1,"completed":1,"skipped":8,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:35:34.786: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 66 lines ...
• [SLOW TEST:13.470 seconds]
[sig-auth] Certificates API [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should support building a client with a CSR
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/certificates.go:55
------------------------------
{"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR","total":-1,"completed":1,"skipped":7,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:35:36.326: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 23 lines ...
Oct  7 16:35:24.772: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Oct  7 16:35:24.916: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Oct  7 16:35:25.346: INFO: Waiting up to 5m0s for pod "security-context-7cb10d2d-accc-467b-8720-222f40cda6db" in namespace "security-context-6171" to be "Succeeded or Failed"
Oct  7 16:35:25.491: INFO: Pod "security-context-7cb10d2d-accc-467b-8720-222f40cda6db": Phase="Pending", Reason="", readiness=false. Elapsed: 144.087299ms
Oct  7 16:35:27.636: INFO: Pod "security-context-7cb10d2d-accc-467b-8720-222f40cda6db": Phase="Pending", Reason="", readiness=false. Elapsed: 2.289181856s
Oct  7 16:35:29.780: INFO: Pod "security-context-7cb10d2d-accc-467b-8720-222f40cda6db": Phase="Pending", Reason="", readiness=false. Elapsed: 4.433221756s
Oct  7 16:35:31.923: INFO: Pod "security-context-7cb10d2d-accc-467b-8720-222f40cda6db": Phase="Pending", Reason="", readiness=false. Elapsed: 6.576197796s
Oct  7 16:35:34.067: INFO: Pod "security-context-7cb10d2d-accc-467b-8720-222f40cda6db": Phase="Pending", Reason="", readiness=false. Elapsed: 8.720354903s
Oct  7 16:35:36.212: INFO: Pod "security-context-7cb10d2d-accc-467b-8720-222f40cda6db": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.865352979s
STEP: Saw pod success
Oct  7 16:35:36.212: INFO: Pod "security-context-7cb10d2d-accc-467b-8720-222f40cda6db" satisfied condition "Succeeded or Failed"
Oct  7 16:35:36.355: INFO: Trying to get logs from node ip-172-20-43-90.sa-east-1.compute.internal pod security-context-7cb10d2d-accc-467b-8720-222f40cda6db container test-container: <nil>
STEP: delete the pod
Oct  7 16:35:36.646: INFO: Waiting for pod security-context-7cb10d2d-accc-467b-8720-222f40cda6db to disappear
Oct  7 16:35:36.790: INFO: Pod security-context-7cb10d2d-accc-467b-8720-222f40cda6db no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:14.321 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":1,"skipped":11,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:35:37.231: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 68 lines ...
Oct  7 16:35:25.209: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-map-c67e5b70-df21-4fc5-9bc4-eb68becc4efc
STEP: Creating a pod to test consume configMaps
Oct  7 16:35:25.787: INFO: Waiting up to 5m0s for pod "pod-configmaps-f1bf858f-8f37-4d51-9877-3482bbe15d11" in namespace "configmap-3384" to be "Succeeded or Failed"
Oct  7 16:35:25.931: INFO: Pod "pod-configmaps-f1bf858f-8f37-4d51-9877-3482bbe15d11": Phase="Pending", Reason="", readiness=false. Elapsed: 144.125759ms
Oct  7 16:35:28.078: INFO: Pod "pod-configmaps-f1bf858f-8f37-4d51-9877-3482bbe15d11": Phase="Pending", Reason="", readiness=false. Elapsed: 2.290350447s
Oct  7 16:35:30.223: INFO: Pod "pod-configmaps-f1bf858f-8f37-4d51-9877-3482bbe15d11": Phase="Pending", Reason="", readiness=false. Elapsed: 4.435327118s
Oct  7 16:35:32.366: INFO: Pod "pod-configmaps-f1bf858f-8f37-4d51-9877-3482bbe15d11": Phase="Pending", Reason="", readiness=false. Elapsed: 6.579229826s
Oct  7 16:35:34.511: INFO: Pod "pod-configmaps-f1bf858f-8f37-4d51-9877-3482bbe15d11": Phase="Pending", Reason="", readiness=false. Elapsed: 8.723558323s
Oct  7 16:35:36.655: INFO: Pod "pod-configmaps-f1bf858f-8f37-4d51-9877-3482bbe15d11": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.868151479s
STEP: Saw pod success
Oct  7 16:35:36.655: INFO: Pod "pod-configmaps-f1bf858f-8f37-4d51-9877-3482bbe15d11" satisfied condition "Succeeded or Failed"
Oct  7 16:35:36.799: INFO: Trying to get logs from node ip-172-20-43-90.sa-east-1.compute.internal pod pod-configmaps-f1bf858f-8f37-4d51-9877-3482bbe15d11 container agnhost-container: <nil>
STEP: delete the pod
Oct  7 16:35:37.095: INFO: Waiting for pod pod-configmaps-f1bf858f-8f37-4d51-9877-3482bbe15d11 to disappear
Oct  7 16:35:37.238: INFO: Pod pod-configmaps-f1bf858f-8f37-4d51-9877-3482bbe15d11 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:14.731 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":23,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:35:37.727: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 47 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:75
STEP: Creating configMap with name configmap-test-volume-612dbfe1-9b59-47fc-9896-c98b7023accc
STEP: Creating a pod to test consume configMaps
Oct  7 16:35:34.818: INFO: Waiting up to 5m0s for pod "pod-configmaps-7ef4a2ef-d559-476e-a331-f4d4dbd5d117" in namespace "configmap-1511" to be "Succeeded or Failed"
Oct  7 16:35:34.961: INFO: Pod "pod-configmaps-7ef4a2ef-d559-476e-a331-f4d4dbd5d117": Phase="Pending", Reason="", readiness=false. Elapsed: 142.982709ms
Oct  7 16:35:37.104: INFO: Pod "pod-configmaps-7ef4a2ef-d559-476e-a331-f4d4dbd5d117": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.286059216s
STEP: Saw pod success
Oct  7 16:35:37.104: INFO: Pod "pod-configmaps-7ef4a2ef-d559-476e-a331-f4d4dbd5d117" satisfied condition "Succeeded or Failed"
Oct  7 16:35:37.246: INFO: Trying to get logs from node ip-172-20-43-90.sa-east-1.compute.internal pod pod-configmaps-7ef4a2ef-d559-476e-a331-f4d4dbd5d117 container agnhost-container: <nil>
STEP: delete the pod
Oct  7 16:35:37.540: INFO: Waiting for pod pod-configmaps-7ef4a2ef-d559-476e-a331-f4d4dbd5d117 to disappear
Oct  7 16:35:37.683: INFO: Pod pod-configmaps-7ef4a2ef-d559-476e-a331-f4d4dbd5d117 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  7 16:35:37.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1511" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":2,"skipped":10,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:35:37.994: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 27 lines ...
Oct  7 16:35:24.820: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:59
STEP: Creating configMap with name configmap-test-volume-f7fb12aa-ec30-4f3b-ba73-ce2bb94cdca5
STEP: Creating a pod to test consume configMaps
Oct  7 16:35:25.417: INFO: Waiting up to 5m0s for pod "pod-configmaps-90301a3f-7f6c-43b3-8490-03a8a04b2e32" in namespace "configmap-3685" to be "Succeeded or Failed"
Oct  7 16:35:25.561: INFO: Pod "pod-configmaps-90301a3f-7f6c-43b3-8490-03a8a04b2e32": Phase="Pending", Reason="", readiness=false. Elapsed: 143.813429ms
Oct  7 16:35:27.706: INFO: Pod "pod-configmaps-90301a3f-7f6c-43b3-8490-03a8a04b2e32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.289108627s
Oct  7 16:35:29.851: INFO: Pod "pod-configmaps-90301a3f-7f6c-43b3-8490-03a8a04b2e32": Phase="Pending", Reason="", readiness=false. Elapsed: 4.434283007s
Oct  7 16:35:31.995: INFO: Pod "pod-configmaps-90301a3f-7f6c-43b3-8490-03a8a04b2e32": Phase="Pending", Reason="", readiness=false. Elapsed: 6.578361787s
Oct  7 16:35:34.139: INFO: Pod "pod-configmaps-90301a3f-7f6c-43b3-8490-03a8a04b2e32": Phase="Pending", Reason="", readiness=false. Elapsed: 8.722596514s
Oct  7 16:35:36.284: INFO: Pod "pod-configmaps-90301a3f-7f6c-43b3-8490-03a8a04b2e32": Phase="Pending", Reason="", readiness=false. Elapsed: 10.867139159s
Oct  7 16:35:38.428: INFO: Pod "pod-configmaps-90301a3f-7f6c-43b3-8490-03a8a04b2e32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.011423816s
STEP: Saw pod success
Oct  7 16:35:38.428: INFO: Pod "pod-configmaps-90301a3f-7f6c-43b3-8490-03a8a04b2e32" satisfied condition "Succeeded or Failed"
Oct  7 16:35:38.572: INFO: Trying to get logs from node ip-172-20-42-249.sa-east-1.compute.internal pod pod-configmaps-90301a3f-7f6c-43b3-8490-03a8a04b2e32 container agnhost-container: <nil>
STEP: delete the pod
Oct  7 16:35:38.894: INFO: Waiting for pod pod-configmaps-90301a3f-7f6c-43b3-8490-03a8a04b2e32 to disappear
Oct  7 16:35:39.037: INFO: Pod pod-configmaps-90301a3f-7f6c-43b3-8490-03a8a04b2e32 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:16.576 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:59
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":1,"skipped":10,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 11 lines ...
Oct  7 16:35:26.255: INFO: The status of Pod server-envvars-24101d39-4ca9-4d71-afd9-54e520a9be6b is Pending, waiting for it to be Running (with Ready = true)
Oct  7 16:35:28.256: INFO: The status of Pod server-envvars-24101d39-4ca9-4d71-afd9-54e520a9be6b is Pending, waiting for it to be Running (with Ready = true)
Oct  7 16:35:30.255: INFO: The status of Pod server-envvars-24101d39-4ca9-4d71-afd9-54e520a9be6b is Pending, waiting for it to be Running (with Ready = true)
Oct  7 16:35:32.255: INFO: The status of Pod server-envvars-24101d39-4ca9-4d71-afd9-54e520a9be6b is Pending, waiting for it to be Running (with Ready = true)
Oct  7 16:35:34.255: INFO: The status of Pod server-envvars-24101d39-4ca9-4d71-afd9-54e520a9be6b is Pending, waiting for it to be Running (with Ready = true)
Oct  7 16:35:36.255: INFO: The status of Pod server-envvars-24101d39-4ca9-4d71-afd9-54e520a9be6b is Running (Ready = true)
Oct  7 16:35:36.689: INFO: Waiting up to 5m0s for pod "client-envvars-a2954458-de5f-424f-839a-326bf00879e2" in namespace "pods-4148" to be "Succeeded or Failed"
Oct  7 16:35:36.833: INFO: Pod "client-envvars-a2954458-de5f-424f-839a-326bf00879e2": Phase="Pending", Reason="", readiness=false. Elapsed: 143.333631ms
Oct  7 16:35:38.976: INFO: Pod "client-envvars-a2954458-de5f-424f-839a-326bf00879e2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.286748195s
STEP: Saw pod success
Oct  7 16:35:38.976: INFO: Pod "client-envvars-a2954458-de5f-424f-839a-326bf00879e2" satisfied condition "Succeeded or Failed"
Oct  7 16:35:39.120: INFO: Trying to get logs from node ip-172-20-43-90.sa-east-1.compute.internal pod client-envvars-a2954458-de5f-424f-839a-326bf00879e2 container env3cont: <nil>
STEP: delete the pod
Oct  7 16:35:39.413: INFO: Waiting for pod client-envvars-a2954458-de5f-424f-839a-326bf00879e2 to disappear
Oct  7 16:35:39.556: INFO: Pod client-envvars-a2954458-de5f-424f-839a-326bf00879e2 no longer exists
[AfterEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:17.193 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":5,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:35:40.010: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 36 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  7 16:35:41.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1983" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":14,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 3 lines ...
Oct  7 16:35:23.394: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Oct  7 16:35:23.537: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward api env vars
Oct  7 16:35:23.976: INFO: Waiting up to 5m0s for pod "downward-api-81e70578-89bb-4b76-baa5-f28c9a010be4" in namespace "downward-api-1483" to be "Succeeded or Failed"
Oct  7 16:35:24.128: INFO: Pod "downward-api-81e70578-89bb-4b76-baa5-f28c9a010be4": Phase="Pending", Reason="", readiness=false. Elapsed: 151.803528ms
Oct  7 16:35:26.274: INFO: Pod "downward-api-81e70578-89bb-4b76-baa5-f28c9a010be4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.298056826s
Oct  7 16:35:28.417: INFO: Pod "downward-api-81e70578-89bb-4b76-baa5-f28c9a010be4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.441532264s
Oct  7 16:35:30.568: INFO: Pod "downward-api-81e70578-89bb-4b76-baa5-f28c9a010be4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.591586977s
Oct  7 16:35:32.712: INFO: Pod "downward-api-81e70578-89bb-4b76-baa5-f28c9a010be4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.736147323s
Oct  7 16:35:34.856: INFO: Pod "downward-api-81e70578-89bb-4b76-baa5-f28c9a010be4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.88055853s
Oct  7 16:35:37.000: INFO: Pod "downward-api-81e70578-89bb-4b76-baa5-f28c9a010be4": Phase="Pending", Reason="", readiness=false. Elapsed: 13.024229086s
Oct  7 16:35:39.148: INFO: Pod "downward-api-81e70578-89bb-4b76-baa5-f28c9a010be4": Phase="Pending", Reason="", readiness=false. Elapsed: 15.171803732s
Oct  7 16:35:41.292: INFO: Pod "downward-api-81e70578-89bb-4b76-baa5-f28c9a010be4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.315945688s
STEP: Saw pod success
Oct  7 16:35:41.292: INFO: Pod "downward-api-81e70578-89bb-4b76-baa5-f28c9a010be4" satisfied condition "Succeeded or Failed"
Oct  7 16:35:41.436: INFO: Trying to get logs from node ip-172-20-56-61.sa-east-1.compute.internal pod downward-api-81e70578-89bb-4b76-baa5-f28c9a010be4 container dapi-container: <nil>
STEP: delete the pod
Oct  7 16:35:41.734: INFO: Waiting for pod downward-api-81e70578-89bb-4b76-baa5-f28c9a010be4 to disappear
Oct  7 16:35:41.877: INFO: Pod downward-api-81e70578-89bb-4b76-baa5-f28c9a010be4 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 21 lines ...
[It] should support non-existent path
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
Oct  7 16:35:23.784: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Oct  7 16:35:24.071: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-gc5c
STEP: Creating a pod to test subpath
Oct  7 16:35:24.254: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-gc5c" in namespace "provisioning-621" to be "Succeeded or Failed"
Oct  7 16:35:24.401: INFO: Pod "pod-subpath-test-inlinevolume-gc5c": Phase="Pending", Reason="", readiness=false. Elapsed: 146.763259ms
Oct  7 16:35:26.546: INFO: Pod "pod-subpath-test-inlinevolume-gc5c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.291296217s
Oct  7 16:35:28.691: INFO: Pod "pod-subpath-test-inlinevolume-gc5c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.436444934s
Oct  7 16:35:30.835: INFO: Pod "pod-subpath-test-inlinevolume-gc5c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.580282427s
Oct  7 16:35:32.978: INFO: Pod "pod-subpath-test-inlinevolume-gc5c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.724162263s
Oct  7 16:35:35.123: INFO: Pod "pod-subpath-test-inlinevolume-gc5c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.86861705s
Oct  7 16:35:37.268: INFO: Pod "pod-subpath-test-inlinevolume-gc5c": Phase="Pending", Reason="", readiness=false. Elapsed: 13.013470226s
Oct  7 16:35:39.413: INFO: Pod "pod-subpath-test-inlinevolume-gc5c": Phase="Pending", Reason="", readiness=false. Elapsed: 15.158883184s
Oct  7 16:35:41.560: INFO: Pod "pod-subpath-test-inlinevolume-gc5c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.305442629s
STEP: Saw pod success
Oct  7 16:35:41.560: INFO: Pod "pod-subpath-test-inlinevolume-gc5c" satisfied condition "Succeeded or Failed"
Oct  7 16:35:41.703: INFO: Trying to get logs from node ip-172-20-47-191.sa-east-1.compute.internal pod pod-subpath-test-inlinevolume-gc5c container test-container-volume-inlinevolume-gc5c: <nil>
STEP: delete the pod
Oct  7 16:35:42.005: INFO: Waiting for pod pod-subpath-test-inlinevolume-gc5c to disappear
Oct  7 16:35:42.149: INFO: Pod pod-subpath-test-inlinevolume-gc5c no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-gc5c
Oct  7 16:35:42.149: INFO: Deleting pod "pod-subpath-test-inlinevolume-gc5c" in namespace "provisioning-621"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path","total":-1,"completed":1,"skipped":3,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:35:42.742: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 136 lines ...
Oct  7 16:35:44.237: INFO: pv is nil


S [SKIPPING] in Spec Setup (BeforeEach) [1.046 seconds]
[sig-storage] PersistentVolumes GCEPD
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should test that deleting a PVC before the pod does not cause pod deletion to fail on PD detach [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:127

  Only supported for providers [gce gke] (not aws)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:85
------------------------------
... skipping 38 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  7 16:35:45.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "metrics-grabber-2814" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from API server.","total":-1,"completed":4,"skipped":55,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:35:46.114: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 49 lines ...
[It] should support readOnly directory specified in the volumeMount
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:369
Oct  7 16:35:24.504: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Oct  7 16:35:24.832: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-4w5k
STEP: Creating a pod to test subpath
Oct  7 16:35:25.016: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-4w5k" in namespace "provisioning-6333" to be "Succeeded or Failed"
Oct  7 16:35:25.159: INFO: Pod "pod-subpath-test-inlinevolume-4w5k": Phase="Pending", Reason="", readiness=false. Elapsed: 143.615709ms
Oct  7 16:35:27.305: INFO: Pod "pod-subpath-test-inlinevolume-4w5k": Phase="Pending", Reason="", readiness=false. Elapsed: 2.289048757s
Oct  7 16:35:29.449: INFO: Pod "pod-subpath-test-inlinevolume-4w5k": Phase="Pending", Reason="", readiness=false. Elapsed: 4.433345718s
Oct  7 16:35:31.593: INFO: Pod "pod-subpath-test-inlinevolume-4w5k": Phase="Pending", Reason="", readiness=false. Elapsed: 6.577099307s
Oct  7 16:35:33.739: INFO: Pod "pod-subpath-test-inlinevolume-4w5k": Phase="Pending", Reason="", readiness=false. Elapsed: 8.722866674s
Oct  7 16:35:35.883: INFO: Pod "pod-subpath-test-inlinevolume-4w5k": Phase="Pending", Reason="", readiness=false. Elapsed: 10.86706126s
Oct  7 16:35:38.028: INFO: Pod "pod-subpath-test-inlinevolume-4w5k": Phase="Pending", Reason="", readiness=false. Elapsed: 13.012507795s
Oct  7 16:35:40.175: INFO: Pod "pod-subpath-test-inlinevolume-4w5k": Phase="Pending", Reason="", readiness=false. Elapsed: 15.159621474s
Oct  7 16:35:42.320: INFO: Pod "pod-subpath-test-inlinevolume-4w5k": Phase="Pending", Reason="", readiness=false. Elapsed: 17.304344748s
Oct  7 16:35:44.464: INFO: Pod "pod-subpath-test-inlinevolume-4w5k": Phase="Pending", Reason="", readiness=false. Elapsed: 19.448178235s
Oct  7 16:35:46.609: INFO: Pod "pod-subpath-test-inlinevolume-4w5k": Phase="Succeeded", Reason="", readiness=false. Elapsed: 21.59280416s
STEP: Saw pod success
Oct  7 16:35:46.609: INFO: Pod "pod-subpath-test-inlinevolume-4w5k" satisfied condition "Succeeded or Failed"
Oct  7 16:35:46.753: INFO: Trying to get logs from node ip-172-20-47-191.sa-east-1.compute.internal pod pod-subpath-test-inlinevolume-4w5k container test-container-subpath-inlinevolume-4w5k: <nil>
STEP: delete the pod
Oct  7 16:35:47.047: INFO: Waiting for pod pod-subpath-test-inlinevolume-4w5k to disappear
Oct  7 16:35:47.191: INFO: Pod pod-subpath-test-inlinevolume-4w5k no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-4w5k
Oct  7 16:35:47.191: INFO: Deleting pod "pod-subpath-test-inlinevolume-4w5k" in namespace "provisioning-6333"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:369
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":1,"skipped":9,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 18 lines ...
Oct  7 16:35:42.551: INFO: PersistentVolumeClaim pvc-xlpmb found but phase is Pending instead of Bound.
Oct  7 16:35:44.715: INFO: PersistentVolumeClaim pvc-xlpmb found and phase=Bound (2.307643516s)
Oct  7 16:35:44.715: INFO: Waiting up to 3m0s for PersistentVolume local-mwsnh to have phase Bound
Oct  7 16:35:44.863: INFO: PersistentVolume local-mwsnh found and phase=Bound (148.35078ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-zjsq
STEP: Creating a pod to test subpath
Oct  7 16:35:45.314: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-zjsq" in namespace "provisioning-6139" to be "Succeeded or Failed"
Oct  7 16:35:45.458: INFO: Pod "pod-subpath-test-preprovisionedpv-zjsq": Phase="Pending", Reason="", readiness=false. Elapsed: 143.418669ms
Oct  7 16:35:47.602: INFO: Pod "pod-subpath-test-preprovisionedpv-zjsq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.287288104s
STEP: Saw pod success
Oct  7 16:35:47.602: INFO: Pod "pod-subpath-test-preprovisionedpv-zjsq" satisfied condition "Succeeded or Failed"
Oct  7 16:35:47.748: INFO: Trying to get logs from node ip-172-20-47-191.sa-east-1.compute.internal pod pod-subpath-test-preprovisionedpv-zjsq container test-container-volume-preprovisionedpv-zjsq: <nil>
STEP: delete the pod
Oct  7 16:35:48.047: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-zjsq to disappear
Oct  7 16:35:48.190: INFO: Pod pod-subpath-test-preprovisionedpv-zjsq no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-zjsq
Oct  7 16:35:48.191: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-zjsq" in namespace "provisioning-6139"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":1,"skipped":12,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":3,"failed":0}
[BeforeEach] version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  7 16:35:42.319: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 46 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:74
    A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]","total":-1,"completed":2,"skipped":3,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] PVC Protection
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 43 lines ...
• [SLOW TEST:28.613 seconds]
[sig-storage] PVC Protection
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Verify that scheduling of a pod that uses PVC that is being deleted fails and the pod becomes Unschedulable
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:145
------------------------------
{"msg":"PASSED [sig-storage] PVC Protection Verify that scheduling of a pod that uses PVC that is being deleted fails and the pod becomes Unschedulable","total":-1,"completed":1,"skipped":1,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  7 16:35:46.145: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_secret.go:90
STEP: Creating projection with secret that has name projected-secret-test-db7f89f4-1541-40f4-9d50-001386e3f730
STEP: Creating a pod to test consume secrets
Oct  7 16:35:47.728: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-17767a0f-46ea-4f7c-b613-1c6eaf8e116d" in namespace "projected-3862" to be "Succeeded or Failed"
Oct  7 16:35:47.871: INFO: Pod "pod-projected-secrets-17767a0f-46ea-4f7c-b613-1c6eaf8e116d": Phase="Pending", Reason="", readiness=false. Elapsed: 142.635569ms
Oct  7 16:35:50.014: INFO: Pod "pod-projected-secrets-17767a0f-46ea-4f7c-b613-1c6eaf8e116d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.28598762s
Oct  7 16:35:52.158: INFO: Pod "pod-projected-secrets-17767a0f-46ea-4f7c-b613-1c6eaf8e116d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.429222613s
STEP: Saw pod success
Oct  7 16:35:52.158: INFO: Pod "pod-projected-secrets-17767a0f-46ea-4f7c-b613-1c6eaf8e116d" satisfied condition "Succeeded or Failed"
Oct  7 16:35:52.301: INFO: Trying to get logs from node ip-172-20-56-61.sa-east-1.compute.internal pod pod-projected-secrets-17767a0f-46ea-4f7c-b613-1c6eaf8e116d container projected-secret-volume-test: <nil>
STEP: delete the pod
Oct  7 16:35:52.592: INFO: Waiting for pod pod-projected-secrets-17767a0f-46ea-4f7c-b613-1c6eaf8e116d to disappear
Oct  7 16:35:52.735: INFO: Pod pod-projected-secrets-17767a0f-46ea-4f7c-b613-1c6eaf8e116d no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 5 lines ...
• [SLOW TEST:7.056 seconds]
[sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_secret.go:90
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]","total":-1,"completed":5,"skipped":59,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:35:53.213: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 93 lines ...
• [SLOW TEST:11.075 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing validating webhooks should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":-1,"completed":2,"skipped":20,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:35:53.927: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 48 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  7 16:35:53.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "cronjob-6051" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] CronJob should support CronJob API operations [Conformance]","total":-1,"completed":2,"skipped":13,"failed":0}

SSS
------------------------------
{"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":-1,"completed":1,"skipped":8,"failed":0}
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  7 16:35:28.360: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 30 lines ...
• [SLOW TEST:29.024 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":2,"skipped":8,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:35:57.397: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 80 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":16,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 114 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl copy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1345
    should copy a file from a running Pod
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1362
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl copy should copy a file from a running Pod","total":-1,"completed":2,"skipped":6,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:35:58.841: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 54 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  7 16:35:59.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-993" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Pods Extended Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":-1,"completed":3,"skipped":10,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:36:00.195: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 134 lines ...
• [SLOW TEST:42.667 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run the lifecycle of a Deployment [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","total":-1,"completed":1,"skipped":0,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:36:05.493: INFO: Only supported for providers [vsphere] (not aws)
... skipping 86 lines ...
• [SLOW TEST:43.354 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  removes definition from spec when one version gets changed to not be served [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":-1,"completed":1,"skipped":13,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 24 lines ...
Oct  7 16:35:56.972: INFO: PersistentVolumeClaim pvc-49l5w found but phase is Pending instead of Bound.
Oct  7 16:35:59.117: INFO: PersistentVolumeClaim pvc-49l5w found and phase=Bound (13.014456166s)
Oct  7 16:35:59.117: INFO: Waiting up to 3m0s for PersistentVolume local-x2l7v to have phase Bound
Oct  7 16:35:59.260: INFO: PersistentVolume local-x2l7v found and phase=Bound (142.755429ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-mxlb
STEP: Creating a pod to test subpath
Oct  7 16:35:59.717: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-mxlb" in namespace "provisioning-7411" to be "Succeeded or Failed"
Oct  7 16:35:59.860: INFO: Pod "pod-subpath-test-preprovisionedpv-mxlb": Phase="Pending", Reason="", readiness=false. Elapsed: 142.88694ms
Oct  7 16:36:02.003: INFO: Pod "pod-subpath-test-preprovisionedpv-mxlb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.286689945s
Oct  7 16:36:04.151: INFO: Pod "pod-subpath-test-preprovisionedpv-mxlb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.433901141s
STEP: Saw pod success
Oct  7 16:36:04.151: INFO: Pod "pod-subpath-test-preprovisionedpv-mxlb" satisfied condition "Succeeded or Failed"
Oct  7 16:36:04.294: INFO: Trying to get logs from node ip-172-20-47-191.sa-east-1.compute.internal pod pod-subpath-test-preprovisionedpv-mxlb container test-container-subpath-preprovisionedpv-mxlb: <nil>
STEP: delete the pod
Oct  7 16:36:04.595: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-mxlb to disappear
Oct  7 16:36:04.740: INFO: Pod pod-subpath-test-preprovisionedpv-mxlb no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-mxlb
Oct  7 16:36:04.740: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-mxlb" in namespace "provisioning-7411"
... skipping 167 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":3,"skipped":23,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
... skipping 44 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (block volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":2,"skipped":16,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 7 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  7 16:36:14.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-2492" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":-1,"completed":2,"skipped":18,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:36:14.408: INFO: Only supported for providers [gce gke] (not aws)
... skipping 14 lines ...
      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1301
------------------------------
SSSSS
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":1,"skipped":23,"failed":0}
[BeforeEach] version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  7 16:35:58.708: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 346 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:74
    should proxy through a service and a pod  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","total":-1,"completed":2,"skipped":23,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 28 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:449
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":2,"skipped":14,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  7 16:36:16.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4436" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":-1,"completed":3,"skipped":25,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:36:16.841: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 85 lines ...
• [SLOW TEST:30.168 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":-1,"completed":2,"skipped":14,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:36:18.011: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 179 lines ...
Oct  7 16:36:12.560: INFO: PersistentVolumeClaim pvc-hf8v6 found but phase is Pending instead of Bound.
Oct  7 16:36:14.703: INFO: PersistentVolumeClaim pvc-hf8v6 found and phase=Bound (15.157128196s)
Oct  7 16:36:14.703: INFO: Waiting up to 3m0s for PersistentVolume local-8dclc to have phase Bound
Oct  7 16:36:14.846: INFO: PersistentVolume local-8dclc found and phase=Bound (142.979068ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-h8n8
STEP: Creating a pod to test subpath
Oct  7 16:36:15.277: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-h8n8" in namespace "provisioning-8772" to be "Succeeded or Failed"
Oct  7 16:36:15.420: INFO: Pod "pod-subpath-test-preprovisionedpv-h8n8": Phase="Pending", Reason="", readiness=false. Elapsed: 143.089879ms
Oct  7 16:36:17.564: INFO: Pod "pod-subpath-test-preprovisionedpv-h8n8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.287365623s
STEP: Saw pod success
Oct  7 16:36:17.564: INFO: Pod "pod-subpath-test-preprovisionedpv-h8n8" satisfied condition "Succeeded or Failed"
Oct  7 16:36:17.707: INFO: Trying to get logs from node ip-172-20-42-249.sa-east-1.compute.internal pod pod-subpath-test-preprovisionedpv-h8n8 container test-container-subpath-preprovisionedpv-h8n8: <nil>
STEP: delete the pod
Oct  7 16:36:18.013: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-h8n8 to disappear
Oct  7 16:36:18.157: INFO: Pod pod-subpath-test-preprovisionedpv-h8n8 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-h8n8
Oct  7 16:36:18.157: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-h8n8" in namespace "provisioning-8772"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":3,"skipped":4,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:36:21.103: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 48 lines ...
[It] should support existing directory
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
Oct  7 16:36:16.375: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Oct  7 16:36:16.375: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-tzxl
STEP: Creating a pod to test subpath
Oct  7 16:36:16.526: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-tzxl" in namespace "provisioning-8150" to be "Succeeded or Failed"
Oct  7 16:36:16.670: INFO: Pod "pod-subpath-test-inlinevolume-tzxl": Phase="Pending", Reason="", readiness=false. Elapsed: 143.872609ms
Oct  7 16:36:18.814: INFO: Pod "pod-subpath-test-inlinevolume-tzxl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.288318261s
Oct  7 16:36:20.962: INFO: Pod "pod-subpath-test-inlinevolume-tzxl": Phase="Pending", Reason="", readiness=false. Elapsed: 4.435697516s
Oct  7 16:36:23.113: INFO: Pod "pod-subpath-test-inlinevolume-tzxl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.587241919s
STEP: Saw pod success
Oct  7 16:36:23.113: INFO: Pod "pod-subpath-test-inlinevolume-tzxl" satisfied condition "Succeeded or Failed"
Oct  7 16:36:23.259: INFO: Trying to get logs from node ip-172-20-56-61.sa-east-1.compute.internal pod pod-subpath-test-inlinevolume-tzxl container test-container-volume-inlinevolume-tzxl: <nil>
STEP: delete the pod
Oct  7 16:36:23.580: INFO: Waiting for pod pod-subpath-test-inlinevolume-tzxl to disappear
Oct  7 16:36:23.724: INFO: Pod pod-subpath-test-inlinevolume-tzxl no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-tzxl
Oct  7 16:36:23.724: INFO: Deleting pod "pod-subpath-test-inlinevolume-tzxl" in namespace "provisioning-8150"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support existing directory","total":-1,"completed":3,"skipped":15,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:36:24.335: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-link]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SSSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":1,"skipped":6,"failed":0}
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  7 16:36:07.683: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 72 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:291
    should create and stop a replication controller  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","total":-1,"completed":2,"skipped":6,"failed":0}

SSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 32 lines ...
• [SLOW TEST:62.829 seconds]
[sig-api-machinery] Watchers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":-1,"completed":2,"skipped":17,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:36:27.765: INFO: Driver hostPathSymlink doesn't support ext3 -- skipping
... skipping 88 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl apply
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:793
    apply set/view last-applied
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:828
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl apply apply set/view last-applied","total":-1,"completed":4,"skipped":14,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:36:28.254: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 116 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  7 16:36:29.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslice-5189" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","total":-1,"completed":5,"skipped":31,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:36:29.668: INFO: Only supported for providers [openstack] (not aws)
... skipping 83 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":4,"skipped":36,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:36:34.036: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 263 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":2,"skipped":14,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:36:37.685: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 34 lines ...
STEP: Destroying namespace "services-5223" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

•
------------------------------
{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":-1,"completed":3,"skipped":19,"failed":0}

SS
------------------------------
{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/json\"","total":-1,"completed":4,"skipped":13,"failed":0}
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  7 16:36:00.655: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 11 lines ...
• [SLOW TEST:38.669 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of different groups [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":-1,"completed":5,"skipped":13,"failed":0}

SS
------------------------------
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 10 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  7 16:36:40.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-7949" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":-1,"completed":2,"skipped":13,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:36:41.255: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 169 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with same fsgroup skips ownership changes to the volume contents
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:208
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with same fsgroup skips ownership changes to the volume contents","total":-1,"completed":1,"skipped":3,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:36:43.294: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 23 lines ...
Oct  7 16:36:39.349: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0777 on tmpfs
Oct  7 16:36:40.222: INFO: Waiting up to 5m0s for pod "pod-101f674d-ec53-46d6-8bfd-0629208d7c90" in namespace "emptydir-2232" to be "Succeeded or Failed"
Oct  7 16:36:40.364: INFO: Pod "pod-101f674d-ec53-46d6-8bfd-0629208d7c90": Phase="Pending", Reason="", readiness=false. Elapsed: 142.617209ms
Oct  7 16:36:42.507: INFO: Pod "pod-101f674d-ec53-46d6-8bfd-0629208d7c90": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.285606297s
STEP: Saw pod success
Oct  7 16:36:42.507: INFO: Pod "pod-101f674d-ec53-46d6-8bfd-0629208d7c90" satisfied condition "Succeeded or Failed"
Oct  7 16:36:42.650: INFO: Trying to get logs from node ip-172-20-43-90.sa-east-1.compute.internal pod pod-101f674d-ec53-46d6-8bfd-0629208d7c90 container test-container: <nil>
STEP: delete the pod
Oct  7 16:36:42.950: INFO: Waiting for pod pod-101f674d-ec53-46d6-8bfd-0629208d7c90 to disappear
Oct  7 16:36:43.093: INFO: Pod pod-101f674d-ec53-46d6-8bfd-0629208d7c90 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  7 16:36:43.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2232" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":15,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:36:43.410: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: hostPathSymlink]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver hostPathSymlink doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 69 lines ...
• [SLOW TEST:6.805 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":24,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
... skipping 61 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":4,"skipped":26,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:36:48.586: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 137 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":3,"skipped":35,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:36:51.831: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 34 lines ...
• [SLOW TEST:61.320 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":18,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:36:58.953: INFO: Only supported for providers [openstack] (not aws)
... skipping 42 lines ...
Oct  7 16:36:42.592: INFO: PersistentVolumeClaim pvc-89xz9 found but phase is Pending instead of Bound.
Oct  7 16:36:44.735: INFO: PersistentVolumeClaim pvc-89xz9 found and phase=Bound (4.430260387s)
Oct  7 16:36:44.736: INFO: Waiting up to 3m0s for PersistentVolume local-8qqkj to have phase Bound
Oct  7 16:36:44.878: INFO: PersistentVolume local-8qqkj found and phase=Bound (142.597378ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-v94c
STEP: Creating a pod to test subpath
Oct  7 16:36:45.313: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-v94c" in namespace "provisioning-3788" to be "Succeeded or Failed"
Oct  7 16:36:45.456: INFO: Pod "pod-subpath-test-preprovisionedpv-v94c": Phase="Pending", Reason="", readiness=false. Elapsed: 143.440298ms
Oct  7 16:36:47.601: INFO: Pod "pod-subpath-test-preprovisionedpv-v94c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.28839948s
Oct  7 16:36:49.745: INFO: Pod "pod-subpath-test-preprovisionedpv-v94c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.432087303s
Oct  7 16:36:51.890: INFO: Pod "pod-subpath-test-preprovisionedpv-v94c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.576735304s
STEP: Saw pod success
Oct  7 16:36:51.890: INFO: Pod "pod-subpath-test-preprovisionedpv-v94c" satisfied condition "Succeeded or Failed"
Oct  7 16:36:52.033: INFO: Trying to get logs from node ip-172-20-47-191.sa-east-1.compute.internal pod pod-subpath-test-preprovisionedpv-v94c container test-container-subpath-preprovisionedpv-v94c: <nil>
STEP: delete the pod
Oct  7 16:36:52.330: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-v94c to disappear
Oct  7 16:36:52.475: INFO: Pod pod-subpath-test-preprovisionedpv-v94c no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-v94c
Oct  7 16:36:52.475: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-v94c" in namespace "provisioning-3788"
STEP: Creating pod pod-subpath-test-preprovisionedpv-v94c
STEP: Creating a pod to test subpath
Oct  7 16:36:52.764: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-v94c" in namespace "provisioning-3788" to be "Succeeded or Failed"
Oct  7 16:36:52.908: INFO: Pod "pod-subpath-test-preprovisionedpv-v94c": Phase="Pending", Reason="", readiness=false. Elapsed: 143.85936ms
Oct  7 16:36:55.051: INFO: Pod "pod-subpath-test-preprovisionedpv-v94c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.286895585s
STEP: Saw pod success
Oct  7 16:36:55.051: INFO: Pod "pod-subpath-test-preprovisionedpv-v94c" satisfied condition "Succeeded or Failed"
Oct  7 16:36:55.196: INFO: Trying to get logs from node ip-172-20-47-191.sa-east-1.compute.internal pod pod-subpath-test-preprovisionedpv-v94c container test-container-subpath-preprovisionedpv-v94c: <nil>
STEP: delete the pod
Oct  7 16:36:55.487: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-v94c to disappear
Oct  7 16:36:55.634: INFO: Pod pod-subpath-test-preprovisionedpv-v94c no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-v94c
Oct  7 16:36:55.634: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-v94c" in namespace "provisioning-3788"
... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:399
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":3,"skipped":18,"failed":0}

SS
------------------------------
[BeforeEach] [sig-api-machinery] Server request timeout
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  7 16:37:01.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "request-timeout-5626" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Server request timeout should return HTTP status code 400 if the user specifies an invalid timeout in the request URL","total":-1,"completed":4,"skipped":20,"failed":0}

S
------------------------------
[BeforeEach] [sig-network] Service endpoints latency
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 423 lines ...
• [SLOW TEST:12.658 seconds]
[sig-network] Service endpoints latency
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should not be very high  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high  [Conformance]","total":-1,"completed":4,"skipped":37,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:37:04.528: INFO: Only supported for providers [openstack] (not aws)
... skipping 39 lines ...
• [SLOW TEST:40.622 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should support orphan deletion of custom resources
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/garbage_collector.go:1055
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should support orphan deletion of custom resources","total":-1,"completed":4,"skipped":21,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:37:05.018: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 27 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating pod pod-subpath-test-configmap-fxm7
STEP: Creating a pod to test atomic-volume-subpath
Oct  7 16:36:44.595: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-fxm7" in namespace "subpath-1345" to be "Succeeded or Failed"
Oct  7 16:36:44.737: INFO: Pod "pod-subpath-test-configmap-fxm7": Phase="Pending", Reason="", readiness=false. Elapsed: 142.460758ms
Oct  7 16:36:46.880: INFO: Pod "pod-subpath-test-configmap-fxm7": Phase="Running", Reason="", readiness=true. Elapsed: 2.285568819s
Oct  7 16:36:49.024: INFO: Pod "pod-subpath-test-configmap-fxm7": Phase="Running", Reason="", readiness=true. Elapsed: 4.429025913s
Oct  7 16:36:51.168: INFO: Pod "pod-subpath-test-configmap-fxm7": Phase="Running", Reason="", readiness=true. Elapsed: 6.572901153s
Oct  7 16:36:53.311: INFO: Pod "pod-subpath-test-configmap-fxm7": Phase="Running", Reason="", readiness=true. Elapsed: 8.716497776s
Oct  7 16:36:55.456: INFO: Pod "pod-subpath-test-configmap-fxm7": Phase="Running", Reason="", readiness=true. Elapsed: 10.86103912s
Oct  7 16:36:57.604: INFO: Pod "pod-subpath-test-configmap-fxm7": Phase="Running", Reason="", readiness=true. Elapsed: 13.009580722s
Oct  7 16:36:59.750: INFO: Pod "pod-subpath-test-configmap-fxm7": Phase="Running", Reason="", readiness=true. Elapsed: 15.155047575s
Oct  7 16:37:01.900: INFO: Pod "pod-subpath-test-configmap-fxm7": Phase="Running", Reason="", readiness=true. Elapsed: 17.305444036s
Oct  7 16:37:04.168: INFO: Pod "pod-subpath-test-configmap-fxm7": Phase="Running", Reason="", readiness=true. Elapsed: 19.572687305s
Oct  7 16:37:06.319: INFO: Pod "pod-subpath-test-configmap-fxm7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 21.723749784s
STEP: Saw pod success
Oct  7 16:37:06.319: INFO: Pod "pod-subpath-test-configmap-fxm7" satisfied condition "Succeeded or Failed"
Oct  7 16:37:06.464: INFO: Trying to get logs from node ip-172-20-43-90.sa-east-1.compute.internal pod pod-subpath-test-configmap-fxm7 container test-container-subpath-configmap-fxm7: <nil>
STEP: delete the pod
Oct  7 16:37:06.966: INFO: Waiting for pod pod-subpath-test-configmap-fxm7 to disappear
Oct  7 16:37:07.196: INFO: Pod pod-subpath-test-configmap-fxm7 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-fxm7
Oct  7 16:37:07.196: INFO: Deleting pod "pod-subpath-test-configmap-fxm7" in namespace "subpath-1345"
... skipping 8 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":-1,"completed":7,"skipped":26,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:37:07.651: INFO: Driver local doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 64 lines ...
Oct  7 16:35:57.417: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename cronjob
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:63
W1007 16:35:58.280549    5508 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob
[It] should delete failed finished jobs with limit of one job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:294
STEP: Creating an AllowConcurrent cronjob with custom history limit
STEP: Ensuring a finished job exists
STEP: Ensuring a finished job exists by listing jobs explicitly
STEP: Ensuring this job and its pods does not exist anymore
STEP: Ensuring there is 1 finished job by listing jobs explicitly
... skipping 4 lines ...
STEP: Destroying namespace "cronjob-9432" for this suite.


• [SLOW TEST:70.285 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete failed finished jobs with limit of one job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:294
------------------------------
S
------------------------------
{"msg":"PASSED [sig-apps] CronJob should delete failed finished jobs with limit of one job","total":-1,"completed":3,"skipped":10,"failed":0}

SSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:37:07.740: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
... skipping 97 lines ...
Oct  7 16:36:12.192: INFO: PersistentVolumeClaim csi-hostpath5nldz found but phase is Pending instead of Bound.
Oct  7 16:36:14.337: INFO: PersistentVolumeClaim csi-hostpath5nldz found but phase is Pending instead of Bound.
Oct  7 16:36:16.481: INFO: PersistentVolumeClaim csi-hostpath5nldz found but phase is Pending instead of Bound.
Oct  7 16:36:18.626: INFO: PersistentVolumeClaim csi-hostpath5nldz found and phase=Bound (47.340013046s)
STEP: Creating pod pod-subpath-test-dynamicpv-67f6
STEP: Creating a pod to test subpath
Oct  7 16:36:19.065: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-67f6" in namespace "provisioning-7539" to be "Succeeded or Failed"
Oct  7 16:36:19.210: INFO: Pod "pod-subpath-test-dynamicpv-67f6": Phase="Pending", Reason="", readiness=false. Elapsed: 144.455259ms
Oct  7 16:36:21.355: INFO: Pod "pod-subpath-test-dynamicpv-67f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.289909994s
Oct  7 16:36:23.500: INFO: Pod "pod-subpath-test-dynamicpv-67f6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.435023727s
Oct  7 16:36:25.645: INFO: Pod "pod-subpath-test-dynamicpv-67f6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.579839429s
Oct  7 16:36:27.790: INFO: Pod "pod-subpath-test-dynamicpv-67f6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.72446416s
Oct  7 16:36:29.934: INFO: Pod "pod-subpath-test-dynamicpv-67f6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.869019042s
Oct  7 16:36:32.082: INFO: Pod "pod-subpath-test-dynamicpv-67f6": Phase="Pending", Reason="", readiness=false. Elapsed: 13.016904818s
Oct  7 16:36:34.231: INFO: Pod "pod-subpath-test-dynamicpv-67f6": Phase="Pending", Reason="", readiness=false. Elapsed: 15.165349751s
Oct  7 16:36:36.375: INFO: Pod "pod-subpath-test-dynamicpv-67f6": Phase="Pending", Reason="", readiness=false. Elapsed: 17.309821243s
Oct  7 16:36:38.520: INFO: Pod "pod-subpath-test-dynamicpv-67f6": Phase="Pending", Reason="", readiness=false. Elapsed: 19.455042885s
Oct  7 16:36:40.666: INFO: Pod "pod-subpath-test-dynamicpv-67f6": Phase="Pending", Reason="", readiness=false. Elapsed: 21.600472009s
Oct  7 16:36:42.811: INFO: Pod "pod-subpath-test-dynamicpv-67f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 23.745731566s
STEP: Saw pod success
Oct  7 16:36:42.811: INFO: Pod "pod-subpath-test-dynamicpv-67f6" satisfied condition "Succeeded or Failed"
Oct  7 16:36:42.955: INFO: Trying to get logs from node ip-172-20-47-191.sa-east-1.compute.internal pod pod-subpath-test-dynamicpv-67f6 container test-container-subpath-dynamicpv-67f6: <nil>
STEP: delete the pod
Oct  7 16:36:43.272: INFO: Waiting for pod pod-subpath-test-dynamicpv-67f6 to disappear
Oct  7 16:36:43.416: INFO: Pod pod-subpath-test-dynamicpv-67f6 no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-67f6
Oct  7 16:36:43.416: INFO: Deleting pod "pod-subpath-test-dynamicpv-67f6" in namespace "provisioning-7539"
... skipping 54 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:369
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":1,"skipped":10,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 14 lines ...
• [SLOW TEST:25.426 seconds]
[sig-api-machinery] Watchers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":-1,"completed":2,"skipped":13,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 14 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48
    listing custom resource definition objects works  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":-1,"completed":5,"skipped":21,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:37:09.886: INFO: Only supported for providers [vsphere] (not aws)
... skipping 87 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-d754371a-50af-4fc2-a9ac-f0e834fc073b
STEP: Creating a pod to test consume secrets
Oct  7 16:37:10.965: INFO: Waiting up to 5m0s for pod "pod-secrets-da8fc5fa-c551-4d82-a2fb-0b7ea21af1d4" in namespace "secrets-3256" to be "Succeeded or Failed"
Oct  7 16:37:11.138: INFO: Pod "pod-secrets-da8fc5fa-c551-4d82-a2fb-0b7ea21af1d4": Phase="Pending", Reason="", readiness=false. Elapsed: 172.707428ms
Oct  7 16:37:13.283: INFO: Pod "pod-secrets-da8fc5fa-c551-4d82-a2fb-0b7ea21af1d4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.317561208s
STEP: Saw pod success
Oct  7 16:37:13.283: INFO: Pod "pod-secrets-da8fc5fa-c551-4d82-a2fb-0b7ea21af1d4" satisfied condition "Succeeded or Failed"
Oct  7 16:37:13.434: INFO: Trying to get logs from node ip-172-20-43-90.sa-east-1.compute.internal pod pod-secrets-da8fc5fa-c551-4d82-a2fb-0b7ea21af1d4 container secret-volume-test: <nil>
STEP: delete the pod
Oct  7 16:37:13.742: INFO: Waiting for pod pod-secrets-da8fc5fa-c551-4d82-a2fb-0b7ea21af1d4 to disappear
Oct  7 16:37:13.888: INFO: Pod pod-secrets-da8fc5fa-c551-4d82-a2fb-0b7ea21af1d4 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  7 16:37:13.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3256" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":28,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:37:14.287: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 24 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-map-05d7e53b-938f-4e2c-9edd-39efe5c07c11
STEP: Creating a pod to test consume configMaps
Oct  7 16:37:08.907: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-bdf1e22b-730e-4d10-9d86-51c1b1bc83b2" in namespace "projected-9087" to be "Succeeded or Failed"
Oct  7 16:37:09.075: INFO: Pod "pod-projected-configmaps-bdf1e22b-730e-4d10-9d86-51c1b1bc83b2": Phase="Pending", Reason="", readiness=false. Elapsed: 168.316688ms
Oct  7 16:37:11.231: INFO: Pod "pod-projected-configmaps-bdf1e22b-730e-4d10-9d86-51c1b1bc83b2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.324112152s
Oct  7 16:37:13.378: INFO: Pod "pod-projected-configmaps-bdf1e22b-730e-4d10-9d86-51c1b1bc83b2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.471028552s
STEP: Saw pod success
Oct  7 16:37:13.378: INFO: Pod "pod-projected-configmaps-bdf1e22b-730e-4d10-9d86-51c1b1bc83b2" satisfied condition "Succeeded or Failed"
Oct  7 16:37:13.522: INFO: Trying to get logs from node ip-172-20-43-90.sa-east-1.compute.internal pod pod-projected-configmaps-bdf1e22b-730e-4d10-9d86-51c1b1bc83b2 container agnhost-container: <nil>
STEP: delete the pod
Oct  7 16:37:13.842: INFO: Waiting for pod pod-projected-configmaps-bdf1e22b-730e-4d10-9d86-51c1b1bc83b2 to disappear
Oct  7 16:37:13.988: INFO: Pod pod-projected-configmaps-bdf1e22b-730e-4d10-9d86-51c1b1bc83b2 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.554 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":18,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
... skipping 54 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":4,"skipped":25,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:37:18.226: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 125 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume one after the other
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":5,"skipped":27,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] Servers with support for API chunking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 77 lines ...
• [SLOW TEST:27.688 seconds]
[sig-api-machinery] Servers with support for API chunking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should return chunks of results for list calls
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/chunking.go:77
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for API chunking should return chunks of results for list calls","total":-1,"completed":4,"skipped":22,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:37:26.692: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 148 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support multiple inline ephemeral volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:211
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support multiple inline ephemeral volumes","total":-1,"completed":3,"skipped":17,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:37:27.152: INFO: Only supported for providers [openstack] (not aws)
... skipping 109 lines ...
Oct  7 16:37:19.377: INFO: stdout: "Paused\n"
STEP: exposing RC
Oct  7 16:37:19.377: INFO: Running '/tmp/kubectl62913309/kubectl --server=https://api.e2e-f7af145b3f-58f2d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-278 expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379'
Oct  7 16:37:20.053: INFO: stderr: ""
Oct  7 16:37:20.054: INFO: stdout: "service/rm2 exposed\n"
Oct  7 16:37:20.197: INFO: Service rm2 in namespace kubectl-278 found.
Oct  7 16:37:22.341: INFO: Get endpoints failed (interval 2s): endpoints "rm2" not found
STEP: exposing service
Oct  7 16:37:24.484: INFO: Running '/tmp/kubectl62913309/kubectl --server=https://api.e2e-f7af145b3f-58f2d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-278 expose service rm2 --name=rm3 --port=2345 --target-port=6379'
Oct  7 16:37:25.197: INFO: stderr: ""
Oct  7 16:37:25.198: INFO: stdout: "service/rm3 exposed\n"
Oct  7 16:37:25.343: INFO: Service rm3 in namespace kubectl-278 found.
[AfterEach] [sig-cli] Kubectl client
... skipping 89 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithoutformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":5,"skipped":42,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-api-machinery] ServerSideApply
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 10 lines ...
STEP: Destroying namespace "apply-3629" for this suite.
[AfterEach] [sig-api-machinery] ServerSideApply
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/apply.go:56

•
------------------------------
{"msg":"PASSED [sig-api-machinery] ServerSideApply should ignore conflict errors if force apply is used","total":-1,"completed":4,"skipped":26,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:37:29.384: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 98 lines ...
Oct  7 16:36:28.516: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-8392fr6wm
STEP: creating a claim
Oct  7 16:36:28.663: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-fqbw
STEP: Creating a pod to test subpath
Oct  7 16:36:29.108: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-fqbw" in namespace "provisioning-8392" to be "Succeeded or Failed"
Oct  7 16:36:29.252: INFO: Pod "pod-subpath-test-dynamicpv-fqbw": Phase="Pending", Reason="", readiness=false. Elapsed: 144.187909ms
Oct  7 16:36:31.397: INFO: Pod "pod-subpath-test-dynamicpv-fqbw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.288788693s
Oct  7 16:36:33.542: INFO: Pod "pod-subpath-test-dynamicpv-fqbw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.434561098s
Oct  7 16:36:35.689: INFO: Pod "pod-subpath-test-dynamicpv-fqbw": Phase="Pending", Reason="", readiness=false. Elapsed: 6.58097601s
Oct  7 16:36:37.839: INFO: Pod "pod-subpath-test-dynamicpv-fqbw": Phase="Pending", Reason="", readiness=false. Elapsed: 8.731295311s
Oct  7 16:36:39.984: INFO: Pod "pod-subpath-test-dynamicpv-fqbw": Phase="Pending", Reason="", readiness=false. Elapsed: 10.875865165s
... skipping 3 lines ...
Oct  7 16:36:48.564: INFO: Pod "pod-subpath-test-dynamicpv-fqbw": Phase="Pending", Reason="", readiness=false. Elapsed: 19.455765278s
Oct  7 16:36:50.710: INFO: Pod "pod-subpath-test-dynamicpv-fqbw": Phase="Pending", Reason="", readiness=false. Elapsed: 21.60168856s
Oct  7 16:36:52.855: INFO: Pod "pod-subpath-test-dynamicpv-fqbw": Phase="Pending", Reason="", readiness=false. Elapsed: 23.747159149s
Oct  7 16:36:55.000: INFO: Pod "pod-subpath-test-dynamicpv-fqbw": Phase="Pending", Reason="", readiness=false. Elapsed: 25.892523255s
Oct  7 16:36:57.153: INFO: Pod "pod-subpath-test-dynamicpv-fqbw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.045060807s
STEP: Saw pod success
Oct  7 16:36:57.153: INFO: Pod "pod-subpath-test-dynamicpv-fqbw" satisfied condition "Succeeded or Failed"
Oct  7 16:36:57.298: INFO: Trying to get logs from node ip-172-20-56-61.sa-east-1.compute.internal pod pod-subpath-test-dynamicpv-fqbw container test-container-volume-dynamicpv-fqbw: <nil>
STEP: delete the pod
Oct  7 16:36:57.604: INFO: Waiting for pod pod-subpath-test-dynamicpv-fqbw to disappear
Oct  7 16:36:57.749: INFO: Pod pod-subpath-test-dynamicpv-fqbw no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-fqbw
Oct  7 16:36:57.749: INFO: Deleting pod "pod-subpath-test-dynamicpv-fqbw" in namespace "provisioning-8392"
... skipping 23 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory","total":-1,"completed":3,"skipped":23,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:37:29.987: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 176 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should create read-only inline ephemeral volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:149
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read-only inline ephemeral volume","total":-1,"completed":2,"skipped":44,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:37:32.092: INFO: Only supported for providers [gce gke] (not aws)
... skipping 41 lines ...
• [SLOW TEST:122.490 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should not emit unexpected warnings
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:221
------------------------------
{"msg":"PASSED [sig-apps] CronJob should not emit unexpected warnings","total":-1,"completed":2,"skipped":16,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:37:34.639: INFO: Only supported for providers [gce gke] (not aws)
... skipping 106 lines ...
Oct  7 16:37:27.456: INFO: PersistentVolumeClaim pvc-mmhw9 found but phase is Pending instead of Bound.
Oct  7 16:37:29.599: INFO: PersistentVolumeClaim pvc-mmhw9 found and phase=Bound (8.721746465s)
Oct  7 16:37:29.599: INFO: Waiting up to 3m0s for PersistentVolume local-slmh2 to have phase Bound
Oct  7 16:37:29.742: INFO: PersistentVolume local-slmh2 found and phase=Bound (142.557268ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-k25w
STEP: Creating a pod to test subpath
Oct  7 16:37:30.175: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-k25w" in namespace "provisioning-5209" to be "Succeeded or Failed"
Oct  7 16:37:30.319: INFO: Pod "pod-subpath-test-preprovisionedpv-k25w": Phase="Pending", Reason="", readiness=false. Elapsed: 143.897399ms
Oct  7 16:37:32.463: INFO: Pod "pod-subpath-test-preprovisionedpv-k25w": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.287522147s
STEP: Saw pod success
Oct  7 16:37:32.463: INFO: Pod "pod-subpath-test-preprovisionedpv-k25w" satisfied condition "Succeeded or Failed"
Oct  7 16:37:32.606: INFO: Trying to get logs from node ip-172-20-56-61.sa-east-1.compute.internal pod pod-subpath-test-preprovisionedpv-k25w container test-container-subpath-preprovisionedpv-k25w: <nil>
STEP: delete the pod
Oct  7 16:37:32.899: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-k25w to disappear
Oct  7 16:37:33.042: INFO: Pod pod-subpath-test-preprovisionedpv-k25w no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-k25w
Oct  7 16:37:33.042: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-k25w" in namespace "provisioning-5209"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":5,"skipped":21,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 14 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  7 16:37:35.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-6170" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return pod details","total":-1,"completed":3,"skipped":30,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:37:36.171: INFO: Only supported for providers [gce gke] (not aws)
... skipping 64 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap configmap-3503/configmap-test-78475465-ec68-4de8-b99a-68b194b294e5
STEP: Creating a pod to test consume configMaps
Oct  7 16:37:27.724: INFO: Waiting up to 5m0s for pod "pod-configmaps-b67d7934-4f8c-4c5d-a82b-eb74c9f45b78" in namespace "configmap-3503" to be "Succeeded or Failed"
Oct  7 16:37:27.867: INFO: Pod "pod-configmaps-b67d7934-4f8c-4c5d-a82b-eb74c9f45b78": Phase="Pending", Reason="", readiness=false. Elapsed: 143.445849ms
Oct  7 16:37:30.012: INFO: Pod "pod-configmaps-b67d7934-4f8c-4c5d-a82b-eb74c9f45b78": Phase="Pending", Reason="", readiness=false. Elapsed: 2.288286395s
Oct  7 16:37:32.158: INFO: Pod "pod-configmaps-b67d7934-4f8c-4c5d-a82b-eb74c9f45b78": Phase="Pending", Reason="", readiness=false. Elapsed: 4.434365894s
Oct  7 16:37:34.303: INFO: Pod "pod-configmaps-b67d7934-4f8c-4c5d-a82b-eb74c9f45b78": Phase="Pending", Reason="", readiness=false. Elapsed: 6.578885903s
Oct  7 16:37:36.450: INFO: Pod "pod-configmaps-b67d7934-4f8c-4c5d-a82b-eb74c9f45b78": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.726619411s
STEP: Saw pod success
Oct  7 16:37:36.451: INFO: Pod "pod-configmaps-b67d7934-4f8c-4c5d-a82b-eb74c9f45b78" satisfied condition "Succeeded or Failed"
Oct  7 16:37:36.594: INFO: Trying to get logs from node ip-172-20-47-191.sa-east-1.compute.internal pod pod-configmaps-b67d7934-4f8c-4c5d-a82b-eb74c9f45b78 container env-test: <nil>
STEP: delete the pod
Oct  7 16:37:36.891: INFO: Waiting for pod pod-configmaps-b67d7934-4f8c-4c5d-a82b-eb74c9f45b78 to disappear
Oct  7 16:37:37.035: INFO: Pod pod-configmaps-b67d7934-4f8c-4c5d-a82b-eb74c9f45b78 no longer exists
[AfterEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:10.617 seconds]
[sig-node] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":32,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:37:37.355: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 76 lines ...
Oct  7 16:36:36.642: INFO: PersistentVolumeClaim csi-hostpath6nslt found but phase is Pending instead of Bound.
Oct  7 16:36:38.786: INFO: PersistentVolumeClaim csi-hostpath6nslt found but phase is Pending instead of Bound.
Oct  7 16:36:40.930: INFO: PersistentVolumeClaim csi-hostpath6nslt found but phase is Pending instead of Bound.
Oct  7 16:36:43.074: INFO: PersistentVolumeClaim csi-hostpath6nslt found and phase=Bound (6.575168881s)
STEP: Expanding non-expandable pvc
Oct  7 16:36:43.360: INFO: currentPvcSize {{1073741824 0} {<nil>} 1Gi BinarySI}, newSize {{2147483648 0} {<nil>}  BinarySI}
Oct  7 16:36:43.652: INFO: Error updating pvc csi-hostpath6nslt: persistentvolumeclaims "csi-hostpath6nslt" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct  7 16:36:45.941: INFO: Error updating pvc csi-hostpath6nslt: persistentvolumeclaims "csi-hostpath6nslt" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
... skipping 15 lines ...
STEP: Deleting pvc
Oct  7 16:37:14.250: INFO: Deleting PersistentVolumeClaim "csi-hostpath6nslt"
Oct  7 16:37:14.397: INFO: Waiting up to 5m0s for PersistentVolume pvc-e40d1f55-77f4-4340-9a19-0df78e60ad7d to get deleted
Oct  7 16:37:14.540: INFO: PersistentVolume pvc-e40d1f55-77f4-4340-9a19-0df78e60ad7d was removed
STEP: Deleting sc
STEP: deleting the test namespace: volume-expand-1606
... skipping 46 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (block volmode)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not allow expansion of pvcs without AllowVolumeExpansion property
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:157
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property","total":-1,"completed":6,"skipped":33,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:37:38.484: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 101 lines ...
STEP: creating execpod-noendpoints on node ip-172-20-43-90.sa-east-1.compute.internal
Oct  7 16:35:24.281: INFO: Creating new exec pod
Oct  7 16:35:32.715: INFO: waiting up to 30s to connect to no-pods:80
STEP: hitting service no-pods:80 from pod execpod-noendpoints on node ip-172-20-43-90.sa-east-1.compute.internal
Oct  7 16:35:32.715: INFO: Running '/tmp/kubectl62913309/kubectl --server=https://api.e2e-f7af145b3f-58f2d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1343 exec execpod-noendpoints7m595 -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80'
Oct  7 16:35:47.953: INFO: rc: 1
Oct  7 16:35:47.953: INFO: error didn't contain 'REFUSED', keep trying: error running /tmp/kubectl62913309/kubectl --server=https://api.e2e-f7af145b3f-58f2d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1343 exec execpod-noendpoints7m595 -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80:
Command stdout:

stderr:
+ /agnhost connect '--timeout=3s' no-pods:80
TIMEOUT
command terminated with exit code 1

error:
exit status 1
... skipping 24 lines ...
Oct  7 16:36:01.953: INFO: Running '/tmp/kubectl62913309/kubectl --server=https://api.e2e-f7af145b3f-58f2d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1343 exec execpod-noendpoints7m595 -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80'
Oct  7 16:36:48.498: INFO: rc: 1
Oct  7 16:36:48.498: INFO: error didn't contain 'REFUSED', keep trying: error running /tmp/kubectl62913309/kubectl --server=https://api.e2e-f7af145b3f-58f2d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1343 exec execpod-noendpoints7m595 -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80:
Command stdout:

stderr:
+ /agnhost connect '--timeout=3s' no-pods:80
DNS: lookup no-pods on 100.64.0.10:53: read udp 100.96.1.4:38721->100.64.0.10:53: i/o timeout
command terminated with exit code 1

error:
exit status 1
Oct  7 16:36:48.498: INFO: Running '/tmp/kubectl62913309/kubectl --server=https://api.e2e-f7af145b3f-58f2d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1343 exec execpod-noendpoints7m595 -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80'
Oct  7 16:37:30.059: INFO: rc: 1
Oct  7 16:37:30.059: INFO: error didn't contain 'REFUSED', keep trying: error running /tmp/kubectl62913309/kubectl --server=https://api.e2e-f7af145b3f-58f2d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1343 exec execpod-noendpoints7m595 -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80:
Command stdout:

stderr:
+ /agnhost connect '--timeout=3s' no-pods:80
DNS: lookup no-pods on 100.64.0.10:53: read udp 100.96.1.4:43040->100.64.0.10:53: i/o timeout
command terminated with exit code 1

error:
exit status 1
Oct  7 16:37:30.059: FAIL: Unexpected error:
    <*errors.errorString | 0xc00023e240>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 239 lines ...
• Failure [136.311 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be rejected when no endpoints exist [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1968

  Oct  7 16:37:30.059: Unexpected error:
      <*errors.errorString | 0xc00023e240>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2007
------------------------------
{"msg":"FAILED [sig-network] Services should be rejected when no endpoints exist","total":-1,"completed":0,"skipped":1,"failed":1,"failures":["[sig-network] Services should be rejected when no endpoints exist"]}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:37:39.124: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 57 lines ...
STEP: Destroying namespace "services-1672" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

•
------------------------------
{"msg":"PASSED [sig-network] Services should check NodePort out-of-range","total":-1,"completed":1,"skipped":11,"failed":1,"failures":["[sig-network] Services should be rejected when no endpoints exist"]}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:37:40.986: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: emptydir]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver emptydir doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 105 lines ...
Oct  7 16:37:17.022: INFO: Creating new exec pod
Oct  7 16:37:26.599: INFO: Running '/tmp/kubectl62913309/kubectl --server=https://api.e2e-f7af145b3f-58f2d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4131 exec execpodql284 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80'
Oct  7 16:37:28.154: INFO: stderr: "+ + ncecho -v hostName\n -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\n"
Oct  7 16:37:28.154: INFO: stdout: "nodeport-test-lcmmb"
Oct  7 16:37:28.154: INFO: Running '/tmp/kubectl62913309/kubectl --server=https://api.e2e-f7af145b3f-58f2d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4131 exec execpodql284 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.64.106.95 80'
Oct  7 16:37:31.639: INFO: rc: 1
Oct  7 16:37:31.639: INFO: Service reachability failing with error: error running /tmp/kubectl62913309/kubectl --server=https://api.e2e-f7af145b3f-58f2d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4131 exec execpodql284 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.64.106.95 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 100.64.106.95 80
nc: connect to 100.64.106.95 port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  7 16:37:32.639: INFO: Running '/tmp/kubectl62913309/kubectl --server=https://api.e2e-f7af145b3f-58f2d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4131 exec execpodql284 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.64.106.95 80'
Oct  7 16:37:34.233: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 100.64.106.95 80\nConnection to 100.64.106.95 80 port [tcp/http] succeeded!\n"
Oct  7 16:37:34.234: INFO: stdout: ""
Oct  7 16:37:34.639: INFO: Running '/tmp/kubectl62913309/kubectl --server=https://api.e2e-f7af145b3f-58f2d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4131 exec execpodql284 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.64.106.95 80'
Oct  7 16:37:38.119: INFO: rc: 1
Oct  7 16:37:38.119: INFO: Service reachability failing with error: error running /tmp/kubectl62913309/kubectl --server=https://api.e2e-f7af145b3f-58f2d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4131 exec execpodql284 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.64.106.95 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 100.64.106.95 80
nc: connect to 100.64.106.95 port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  7 16:37:38.639: INFO: Running '/tmp/kubectl62913309/kubectl --server=https://api.e2e-f7af145b3f-58f2d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4131 exec execpodql284 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.64.106.95 80'
Oct  7 16:37:40.250: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 100.64.106.95 80\nConnection to 100.64.106.95 80 port [tcp/http] succeeded!\n"
Oct  7 16:37:40.250: INFO: stdout: "nodeport-test-lcmmb"
Oct  7 16:37:40.250: INFO: Running '/tmp/kubectl62913309/kubectl --server=https://api.e2e-f7af145b3f-58f2d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4131 exec execpodql284 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.56.61 31230'
... skipping 13 lines ...
• [SLOW TEST:33.673 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to create a functioning NodePort service [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":-1,"completed":3,"skipped":17,"failed":0}

SSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:37:43.566: INFO: Only supported for providers [vsphere] (not aws)
... skipping 101 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run with an explicit non-root user ID [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:129
Oct  7 16:37:38.261: INFO: Waiting up to 5m0s for pod "explicit-nonroot-uid" in namespace "security-context-test-2640" to be "Succeeded or Failed"
Oct  7 16:37:38.405: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 143.735719ms
Oct  7 16:37:40.550: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.288269427s
Oct  7 16:37:42.695: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 4.433963963s
Oct  7 16:37:44.841: INFO: Pod "explicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.579918565s
Oct  7 16:37:44.841: INFO: Pod "explicit-nonroot-uid" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  7 16:37:44.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-2640" for this suite.


... skipping 2 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a container with runAsNonRoot
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:104
    should run with an explicit non-root user ID [LinuxOnly]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:129
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should run with an explicit non-root user ID [LinuxOnly]","total":-1,"completed":6,"skipped":42,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:37:45.296: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 199 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI online volume expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:672
    should expand volume without restarting pod if attach=off, nodeExpansion=on
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:687
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI online volume expansion should expand volume without restarting pod if attach=off, nodeExpansion=on","total":-1,"completed":1,"skipped":20,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:37:45.580: INFO: Only supported for providers [vsphere] (not aws)
... skipping 91 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name projected-secret-test-map-ea8c4f89-7c49-4b9e-a8f5-7c86bb771bc3
STEP: Creating a pod to test consume secrets
Oct  7 16:37:39.541: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8e1230b4-5e86-4abe-b16d-72398bb502fe" in namespace "projected-9778" to be "Succeeded or Failed"
Oct  7 16:37:39.687: INFO: Pod "pod-projected-secrets-8e1230b4-5e86-4abe-b16d-72398bb502fe": Phase="Pending", Reason="", readiness=false. Elapsed: 146.492196ms
Oct  7 16:37:41.831: INFO: Pod "pod-projected-secrets-8e1230b4-5e86-4abe-b16d-72398bb502fe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.290000482s
Oct  7 16:37:43.974: INFO: Pod "pod-projected-secrets-8e1230b4-5e86-4abe-b16d-72398bb502fe": Phase="Pending", Reason="", readiness=false. Elapsed: 4.433751876s
Oct  7 16:37:46.118: INFO: Pod "pod-projected-secrets-8e1230b4-5e86-4abe-b16d-72398bb502fe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.577743392s
STEP: Saw pod success
Oct  7 16:37:46.118: INFO: Pod "pod-projected-secrets-8e1230b4-5e86-4abe-b16d-72398bb502fe" satisfied condition "Succeeded or Failed"
Oct  7 16:37:46.262: INFO: Trying to get logs from node ip-172-20-47-191.sa-east-1.compute.internal pod pod-projected-secrets-8e1230b4-5e86-4abe-b16d-72398bb502fe container projected-secret-volume-test: <nil>
STEP: delete the pod
Oct  7 16:37:46.555: INFO: Waiting for pod pod-projected-secrets-8e1230b4-5e86-4abe-b16d-72398bb502fe to disappear
Oct  7 16:37:46.697: INFO: Pod pod-projected-secrets-8e1230b4-5e86-4abe-b16d-72398bb502fe no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:8.452 seconds]
[sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":42,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:37:46.998: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 40 lines ...
• [SLOW TEST:12.150 seconds]
[sig-node] Events
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":-1,"completed":6,"skipped":22,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 252 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      Verify if offline PVC expansion works
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:174
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works","total":-1,"completed":6,"skipped":70,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:37:50.032: INFO: Only supported for providers [azure] (not aws)
... skipping 22 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should allow privilege escalation when true [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:367
Oct  7 16:37:37.094: INFO: Waiting up to 5m0s for pod "alpine-nnp-true-e18c5bd5-ae2d-4172-aa0f-d54d571b4204" in namespace "security-context-test-8534" to be "Succeeded or Failed"
Oct  7 16:37:37.243: INFO: Pod "alpine-nnp-true-e18c5bd5-ae2d-4172-aa0f-d54d571b4204": Phase="Pending", Reason="", readiness=false. Elapsed: 148.86549ms
Oct  7 16:37:39.387: INFO: Pod "alpine-nnp-true-e18c5bd5-ae2d-4172-aa0f-d54d571b4204": Phase="Pending", Reason="", readiness=false. Elapsed: 2.292807659s
Oct  7 16:37:41.533: INFO: Pod "alpine-nnp-true-e18c5bd5-ae2d-4172-aa0f-d54d571b4204": Phase="Pending", Reason="", readiness=false. Elapsed: 4.438708083s
Oct  7 16:37:43.679: INFO: Pod "alpine-nnp-true-e18c5bd5-ae2d-4172-aa0f-d54d571b4204": Phase="Pending", Reason="", readiness=false. Elapsed: 6.584787988s
Oct  7 16:37:45.825: INFO: Pod "alpine-nnp-true-e18c5bd5-ae2d-4172-aa0f-d54d571b4204": Phase="Pending", Reason="", readiness=false. Elapsed: 8.731214353s
Oct  7 16:37:47.970: INFO: Pod "alpine-nnp-true-e18c5bd5-ae2d-4172-aa0f-d54d571b4204": Phase="Pending", Reason="", readiness=false. Elapsed: 10.87599482s
Oct  7 16:37:50.114: INFO: Pod "alpine-nnp-true-e18c5bd5-ae2d-4172-aa0f-d54d571b4204": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.020051845s
Oct  7 16:37:50.114: INFO: Pod "alpine-nnp-true-e18c5bd5-ae2d-4172-aa0f-d54d571b4204" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  7 16:37:50.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-8534" for this suite.


... skipping 2 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when creating containers with AllowPrivilegeEscalation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:296
    should allow privilege escalation when true [LinuxOnly] [NodeConformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:367
------------------------------
{"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when true [LinuxOnly] [NodeConformance]","total":-1,"completed":4,"skipped":44,"failed":0}

SSSSS
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]","total":-1,"completed":7,"skipped":40,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  7 16:37:27.932: INFO: >>> kubeConfig: /root/.kube/config
... skipping 19 lines ...
Oct  7 16:37:42.298: INFO: PersistentVolumeClaim pvc-l449v found but phase is Pending instead of Bound.
Oct  7 16:37:44.442: INFO: PersistentVolumeClaim pvc-l449v found and phase=Bound (10.862368581s)
Oct  7 16:37:44.442: INFO: Waiting up to 3m0s for PersistentVolume local-g4ccs to have phase Bound
Oct  7 16:37:44.585: INFO: PersistentVolume local-g4ccs found and phase=Bound (142.945198ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-6qgx
STEP: Creating a pod to test exec-volume-test
Oct  7 16:37:45.015: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-6qgx" in namespace "volume-2552" to be "Succeeded or Failed"
Oct  7 16:37:45.159: INFO: Pod "exec-volume-test-preprovisionedpv-6qgx": Phase="Pending", Reason="", readiness=false. Elapsed: 143.051979ms
Oct  7 16:37:47.303: INFO: Pod "exec-volume-test-preprovisionedpv-6qgx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.287600265s
STEP: Saw pod success
Oct  7 16:37:47.303: INFO: Pod "exec-volume-test-preprovisionedpv-6qgx" satisfied condition "Succeeded or Failed"
Oct  7 16:37:47.446: INFO: Trying to get logs from node ip-172-20-56-61.sa-east-1.compute.internal pod exec-volume-test-preprovisionedpv-6qgx container exec-container-preprovisionedpv-6qgx: <nil>
STEP: delete the pod
Oct  7 16:37:47.737: INFO: Waiting for pod exec-volume-test-preprovisionedpv-6qgx to disappear
Oct  7 16:37:47.882: INFO: Pod exec-volume-test-preprovisionedpv-6qgx no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-6qgx
Oct  7 16:37:47.882: INFO: Deleting pod "exec-volume-test-preprovisionedpv-6qgx" in namespace "volume-2552"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":8,"skipped":40,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:37:51.710: INFO: Only supported for providers [azure] (not aws)
... skipping 96 lines ...
Oct  7 16:37:17.038: INFO: >>> kubeConfig: /root/.kube/config
Oct  7 16:37:17.988: INFO: Exec stderr: ""
Oct  7 16:37:20.429: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir "/var/lib/kubelet/mount-propagation-839"/host; mount -t tmpfs e2e-mount-propagation-host "/var/lib/kubelet/mount-propagation-839"/host; echo host > "/var/lib/kubelet/mount-propagation-839"/host/file] Namespace:mount-propagation-839 PodName:hostexec-ip-172-20-42-249.sa-east-1.compute.internal-d4zv9 ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Oct  7 16:37:20.429: INFO: >>> kubeConfig: /root/.kube/config
Oct  7 16:37:21.537: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-839 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct  7 16:37:21.537: INFO: >>> kubeConfig: /root/.kube/config
Oct  7 16:37:22.487: INFO: pod master mount master: stdout: "master", stderr: "" error: <nil>
Oct  7 16:37:22.631: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-839 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct  7 16:37:22.631: INFO: >>> kubeConfig: /root/.kube/config
Oct  7 16:37:23.574: INFO: pod master mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1
Oct  7 16:37:23.719: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-839 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct  7 16:37:23.719: INFO: >>> kubeConfig: /root/.kube/config
Oct  7 16:37:24.682: INFO: pod master mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1
Oct  7 16:37:24.826: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-839 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct  7 16:37:24.826: INFO: >>> kubeConfig: /root/.kube/config
Oct  7 16:37:25.852: INFO: pod master mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1
Oct  7 16:37:25.999: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-839 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct  7 16:37:25.999: INFO: >>> kubeConfig: /root/.kube/config
Oct  7 16:37:26.993: INFO: pod master mount host: stdout: "host", stderr: "" error: <nil>
Oct  7 16:37:27.137: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-839 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct  7 16:37:27.138: INFO: >>> kubeConfig: /root/.kube/config
Oct  7 16:37:28.067: INFO: pod slave mount master: stdout: "master", stderr: "" error: <nil>
Oct  7 16:37:28.212: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-839 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct  7 16:37:28.212: INFO: >>> kubeConfig: /root/.kube/config
Oct  7 16:37:29.176: INFO: pod slave mount slave: stdout: "slave", stderr: "" error: <nil>
Oct  7 16:37:29.320: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-839 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct  7 16:37:29.320: INFO: >>> kubeConfig: /root/.kube/config
Oct  7 16:37:30.272: INFO: pod slave mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1
Oct  7 16:37:30.432: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-839 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct  7 16:37:30.432: INFO: >>> kubeConfig: /root/.kube/config
Oct  7 16:37:31.358: INFO: pod slave mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1
Oct  7 16:37:31.502: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-839 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct  7 16:37:31.502: INFO: >>> kubeConfig: /root/.kube/config
Oct  7 16:37:32.520: INFO: pod slave mount host: stdout: "host", stderr: "" error: <nil>
Oct  7 16:37:32.664: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-839 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct  7 16:37:32.664: INFO: >>> kubeConfig: /root/.kube/config
Oct  7 16:37:33.617: INFO: pod private mount master: stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1
Oct  7 16:37:33.761: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-839 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct  7 16:37:33.761: INFO: >>> kubeConfig: /root/.kube/config
Oct  7 16:37:34.717: INFO: pod private mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1
Oct  7 16:37:34.861: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-839 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct  7 16:37:34.861: INFO: >>> kubeConfig: /root/.kube/config
Oct  7 16:37:35.815: INFO: pod private mount private: stdout: "private", stderr: "" error: <nil>
Oct  7 16:37:35.960: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-839 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct  7 16:37:35.960: INFO: >>> kubeConfig: /root/.kube/config
Oct  7 16:37:36.897: INFO: pod private mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1
Oct  7 16:37:37.042: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-839 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct  7 16:37:37.042: INFO: >>> kubeConfig: /root/.kube/config
Oct  7 16:37:38.023: INFO: pod private mount host: stdout: "", stderr: "cat: can't open '/mnt/test/host/file': No such file or directory" error: command terminated with exit code 1
Oct  7 16:37:38.168: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-839 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct  7 16:37:38.168: INFO: >>> kubeConfig: /root/.kube/config
Oct  7 16:37:39.120: INFO: pod default mount master: stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1
Oct  7 16:37:39.265: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-839 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct  7 16:37:39.265: INFO: >>> kubeConfig: /root/.kube/config
Oct  7 16:37:40.259: INFO: pod default mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1
Oct  7 16:37:40.403: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-839 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct  7 16:37:40.403: INFO: >>> kubeConfig: /root/.kube/config
Oct  7 16:37:41.346: INFO: pod default mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1
Oct  7 16:37:41.491: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-839 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct  7 16:37:41.491: INFO: >>> kubeConfig: /root/.kube/config
Oct  7 16:37:42.447: INFO: pod default mount default: stdout: "default", stderr: "" error: <nil>
Oct  7 16:37:42.591: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-839 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct  7 16:37:42.591: INFO: >>> kubeConfig: /root/.kube/config
Oct  7 16:37:43.533: INFO: pod default mount host: stdout: "", stderr: "cat: can't open '/mnt/test/host/file': No such file or directory" error: command terminated with exit code 1
Oct  7 16:37:43.533: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c test `cat "/var/lib/kubelet/mount-propagation-839"/master/file` = master] Namespace:mount-propagation-839 PodName:hostexec-ip-172-20-42-249.sa-east-1.compute.internal-d4zv9 ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Oct  7 16:37:43.533: INFO: >>> kubeConfig: /root/.kube/config
Oct  7 16:37:44.473: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c test ! -e "/var/lib/kubelet/mount-propagation-839"/slave/file] Namespace:mount-propagation-839 PodName:hostexec-ip-172-20-42-249.sa-east-1.compute.internal-d4zv9 ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Oct  7 16:37:44.473: INFO: >>> kubeConfig: /root/.kube/config
Oct  7 16:37:45.471: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/var/lib/kubelet/mount-propagation-839"/host] Namespace:mount-propagation-839 PodName:hostexec-ip-172-20-42-249.sa-east-1.compute.internal-d4zv9 ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Oct  7 16:37:45.471: INFO: >>> kubeConfig: /root/.kube/config
... skipping 21 lines ...
• [SLOW TEST:77.215 seconds]
[sig-node] Mount propagation
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should propagate mounts to the host
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/mount_propagation.go:82
------------------------------
{"msg":"PASSED [sig-node] Mount propagation should propagate mounts to the host","total":-1,"completed":5,"skipped":45,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:37:52.348: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 38 lines ...
Oct  7 16:37:42.222: INFO: PersistentVolumeClaim pvc-gqtf6 found but phase is Pending instead of Bound.
Oct  7 16:37:44.366: INFO: PersistentVolumeClaim pvc-gqtf6 found and phase=Bound (8.722601084s)
Oct  7 16:37:44.366: INFO: Waiting up to 3m0s for PersistentVolume local-hm7p4 to have phase Bound
Oct  7 16:37:44.509: INFO: PersistentVolume local-hm7p4 found and phase=Bound (143.385968ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-r8bx
STEP: Creating a pod to test subpath
Oct  7 16:37:44.941: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-r8bx" in namespace "provisioning-1865" to be "Succeeded or Failed"
Oct  7 16:37:45.086: INFO: Pod "pod-subpath-test-preprovisionedpv-r8bx": Phase="Pending", Reason="", readiness=false. Elapsed: 144.565399ms
Oct  7 16:37:47.231: INFO: Pod "pod-subpath-test-preprovisionedpv-r8bx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.289342735s
Oct  7 16:37:49.388: INFO: Pod "pod-subpath-test-preprovisionedpv-r8bx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.44680143s
Oct  7 16:37:51.534: INFO: Pod "pod-subpath-test-preprovisionedpv-r8bx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.59188315s
STEP: Saw pod success
Oct  7 16:37:51.534: INFO: Pod "pod-subpath-test-preprovisionedpv-r8bx" satisfied condition "Succeeded or Failed"
Oct  7 16:37:51.678: INFO: Trying to get logs from node ip-172-20-47-191.sa-east-1.compute.internal pod pod-subpath-test-preprovisionedpv-r8bx container test-container-volume-preprovisionedpv-r8bx: <nil>
STEP: delete the pod
Oct  7 16:37:51.979: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-r8bx to disappear
Oct  7 16:37:52.125: INFO: Pod pod-subpath-test-preprovisionedpv-r8bx no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-r8bx
Oct  7 16:37:52.125: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-r8bx" in namespace "provisioning-1865"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":6,"skipped":46,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:37:54.148: INFO: Only supported for providers [azure] (not aws)
... skipping 40 lines ...
• [SLOW TEST:75.534 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":21,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:37:54.437: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 2 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: windows-gcepd]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1301
------------------------------
... skipping 41 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl replace
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1545
    should update a single-container pod's image  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":-1,"completed":2,"skipped":23,"failed":1,"failures":["[sig-network] Services should be rejected when no endpoints exist"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  7 16:37:54.560: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename topology
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to schedule a pod which has topologies that conflict with AllowedTopologies
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192
Oct  7 16:37:55.425: INFO: found topology map[topology.kubernetes.io/zone:sa-east-1a]
Oct  7 16:37:55.425: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
Oct  7 16:37:55.425: INFO: Not enough topologies in cluster -- skipping
STEP: Deleting pvc
STEP: Deleting sc
... skipping 7 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: aws]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Not enough topologies in cluster -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:199
------------------------------
... skipping 50 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name projected-secret-test-64907355-205c-4b91-9858-c569130d1ea4
STEP: Creating a pod to test consume secrets
Oct  7 16:37:56.962: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b7a5f35f-3ff4-41c1-a546-4a168a2be362" in namespace "projected-3780" to be "Succeeded or Failed"
Oct  7 16:37:57.105: INFO: Pod "pod-projected-secrets-b7a5f35f-3ff4-41c1-a546-4a168a2be362": Phase="Pending", Reason="", readiness=false. Elapsed: 143.076538ms
Oct  7 16:37:59.249: INFO: Pod "pod-projected-secrets-b7a5f35f-3ff4-41c1-a546-4a168a2be362": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.286835376s
STEP: Saw pod success
Oct  7 16:37:59.249: INFO: Pod "pod-projected-secrets-b7a5f35f-3ff4-41c1-a546-4a168a2be362" satisfied condition "Succeeded or Failed"
Oct  7 16:37:59.392: INFO: Trying to get logs from node ip-172-20-47-191.sa-east-1.compute.internal pod pod-projected-secrets-b7a5f35f-3ff4-41c1-a546-4a168a2be362 container projected-secret-volume-test: <nil>
STEP: delete the pod
Oct  7 16:37:59.696: INFO: Waiting for pod pod-projected-secrets-b7a5f35f-3ff4-41c1-a546-4a168a2be362 to disappear
Oct  7 16:37:59.840: INFO: Pod pod-projected-secrets-b7a5f35f-3ff4-41c1-a546-4a168a2be362 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  7 16:37:59.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3780" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":37,"failed":1,"failures":["[sig-network] Services should be rejected when no endpoints exist"]}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:38:00.144: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 93 lines ...
• [SLOW TEST:14.573 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny attaching pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":-1,"completed":2,"skipped":40,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:38:00.266: INFO: Only supported for providers [vsphere] (not aws)
... skipping 83 lines ...
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  7 16:37:47.009: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to exceed backoffLimit
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:349
STEP: Creating a job
STEP: Ensuring job exceed backofflimit
STEP: Checking that 2 pod created and status is failed
[AfterEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  7 16:38:00.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-2626" for this suite.


• [SLOW TEST:13.436 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should fail to exceed backoffLimit
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:349
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  7 16:37:51.721: INFO: >>> kubeConfig: /root/.kube/config
... skipping 17 lines ...
• [SLOW TEST:8.806 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":44,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:38:00.549: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 47 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-acdffaed-db27-4d91-a222-7e7b81583ac3
STEP: Creating a pod to test consume configMaps
Oct  7 16:37:53.377: INFO: Waiting up to 5m0s for pod "pod-configmaps-2e3d9ba6-6288-4372-88a4-529cec13d59a" in namespace "configmap-1009" to be "Succeeded or Failed"
Oct  7 16:37:53.522: INFO: Pod "pod-configmaps-2e3d9ba6-6288-4372-88a4-529cec13d59a": Phase="Pending", Reason="", readiness=false. Elapsed: 144.5649ms
Oct  7 16:37:55.667: INFO: Pod "pod-configmaps-2e3d9ba6-6288-4372-88a4-529cec13d59a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.290033186s
Oct  7 16:37:57.813: INFO: Pod "pod-configmaps-2e3d9ba6-6288-4372-88a4-529cec13d59a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.435507634s
Oct  7 16:37:59.958: INFO: Pod "pod-configmaps-2e3d9ba6-6288-4372-88a4-529cec13d59a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.58098842s
STEP: Saw pod success
Oct  7 16:37:59.958: INFO: Pod "pod-configmaps-2e3d9ba6-6288-4372-88a4-529cec13d59a" satisfied condition "Succeeded or Failed"
Oct  7 16:38:00.103: INFO: Trying to get logs from node ip-172-20-56-61.sa-east-1.compute.internal pod pod-configmaps-2e3d9ba6-6288-4372-88a4-529cec13d59a container agnhost-container: <nil>
STEP: delete the pod
Oct  7 16:38:00.398: INFO: Waiting for pod pod-configmaps-2e3d9ba6-6288-4372-88a4-529cec13d59a to disappear
Oct  7 16:38:00.542: INFO: Pod pod-configmaps-2e3d9ba6-6288-4372-88a4-529cec13d59a no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:8.469 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":47,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 125 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI attach test using mock driver
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:316
    should preserve attachment policy when no CSIDriver present
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:338
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI attach test using mock driver should preserve attachment policy when no CSIDriver present","total":-1,"completed":5,"skipped":36,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:38:03.478: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 61 lines ...
Oct  7 16:37:56.572: INFO: PersistentVolumeClaim pvc-d7vbq found but phase is Pending instead of Bound.
Oct  7 16:37:58.717: INFO: PersistentVolumeClaim pvc-d7vbq found and phase=Bound (6.587540494s)
Oct  7 16:37:58.717: INFO: Waiting up to 3m0s for PersistentVolume local-rw4h2 to have phase Bound
Oct  7 16:37:58.860: INFO: PersistentVolume local-rw4h2 found and phase=Bound (143.222188ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-xzxb
STEP: Creating a pod to test exec-volume-test
Oct  7 16:37:59.297: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-xzxb" in namespace "volume-2343" to be "Succeeded or Failed"
Oct  7 16:37:59.442: INFO: Pod "exec-volume-test-preprovisionedpv-xzxb": Phase="Pending", Reason="", readiness=false. Elapsed: 145.208068ms
Oct  7 16:38:01.587: INFO: Pod "exec-volume-test-preprovisionedpv-xzxb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.290154913s
STEP: Saw pod success
Oct  7 16:38:01.587: INFO: Pod "exec-volume-test-preprovisionedpv-xzxb" satisfied condition "Succeeded or Failed"
Oct  7 16:38:01.730: INFO: Trying to get logs from node ip-172-20-47-191.sa-east-1.compute.internal pod exec-volume-test-preprovisionedpv-xzxb container exec-container-preprovisionedpv-xzxb: <nil>
STEP: delete the pod
Oct  7 16:38:02.025: INFO: Waiting for pod exec-volume-test-preprovisionedpv-xzxb to disappear
Oct  7 16:38:02.179: INFO: Pod exec-volume-test-preprovisionedpv-xzxb no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-xzxb
Oct  7 16:38:02.179: INFO: Deleting pod "exec-volume-test-preprovisionedpv-xzxb" in namespace "volume-2343"
... skipping 17 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":4,"skipped":34,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Oct  7 16:38:04.367: INFO: Waiting up to 5m0s for pod "downwardapi-volume-051c6224-9976-4db4-81ed-e2d4bbca24cb" in namespace "projected-1886" to be "Succeeded or Failed"
Oct  7 16:38:04.511: INFO: Pod "downwardapi-volume-051c6224-9976-4db4-81ed-e2d4bbca24cb": Phase="Pending", Reason="", readiness=false. Elapsed: 143.581728ms
Oct  7 16:38:06.656: INFO: Pod "downwardapi-volume-051c6224-9976-4db4-81ed-e2d4bbca24cb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.288913244s
STEP: Saw pod success
Oct  7 16:38:06.656: INFO: Pod "downwardapi-volume-051c6224-9976-4db4-81ed-e2d4bbca24cb" satisfied condition "Succeeded or Failed"
Oct  7 16:38:06.801: INFO: Trying to get logs from node ip-172-20-47-191.sa-east-1.compute.internal pod downwardapi-volume-051c6224-9976-4db4-81ed-e2d4bbca24cb container client-container: <nil>
STEP: delete the pod
Oct  7 16:38:07.107: INFO: Waiting for pod downwardapi-volume-051c6224-9976-4db4-81ed-e2d4bbca24cb to disappear
Oct  7 16:38:07.250: INFO: Pod downwardapi-volume-051c6224-9976-4db4-81ed-e2d4bbca24cb no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  7 16:38:07.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1886" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":40,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:38:07.582: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 23 lines ...
Oct  7 16:38:00.181: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0666 on tmpfs
Oct  7 16:38:01.043: INFO: Waiting up to 5m0s for pod "pod-2274d9d2-8a0a-412a-a42f-10ce47550a52" in namespace "emptydir-6486" to be "Succeeded or Failed"
Oct  7 16:38:01.187: INFO: Pod "pod-2274d9d2-8a0a-412a-a42f-10ce47550a52": Phase="Pending", Reason="", readiness=false. Elapsed: 143.139288ms
Oct  7 16:38:03.331: INFO: Pod "pod-2274d9d2-8a0a-412a-a42f-10ce47550a52": Phase="Pending", Reason="", readiness=false. Elapsed: 2.287327774s
Oct  7 16:38:05.477: INFO: Pod "pod-2274d9d2-8a0a-412a-a42f-10ce47550a52": Phase="Pending", Reason="", readiness=false. Elapsed: 4.4330469s
Oct  7 16:38:07.620: INFO: Pod "pod-2274d9d2-8a0a-412a-a42f-10ce47550a52": Phase="Pending", Reason="", readiness=false. Elapsed: 6.576331745s
Oct  7 16:38:09.764: INFO: Pod "pod-2274d9d2-8a0a-412a-a42f-10ce47550a52": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.720813721s
STEP: Saw pod success
Oct  7 16:38:09.764: INFO: Pod "pod-2274d9d2-8a0a-412a-a42f-10ce47550a52" satisfied condition "Succeeded or Failed"
Oct  7 16:38:09.908: INFO: Trying to get logs from node ip-172-20-56-61.sa-east-1.compute.internal pod pod-2274d9d2-8a0a-412a-a42f-10ce47550a52 container test-container: <nil>
STEP: delete the pod
Oct  7 16:38:10.207: INFO: Waiting for pod pod-2274d9d2-8a0a-412a-a42f-10ce47550a52 to disappear
Oct  7 16:38:10.351: INFO: Pod pod-2274d9d2-8a0a-412a-a42f-10ce47550a52 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 148 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should provision a volume and schedule a pod with AllowedTopologies
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:164
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies","total":-1,"completed":6,"skipped":28,"failed":0}

SS
------------------------------
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 80 lines ...
Oct  7 16:37:14.762: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [csi-hostpathj494n] to have phase Bound
Oct  7 16:37:14.905: INFO: PersistentVolumeClaim csi-hostpathj494n found but phase is Pending instead of Bound.
Oct  7 16:37:17.048: INFO: PersistentVolumeClaim csi-hostpathj494n found but phase is Pending instead of Bound.
Oct  7 16:37:19.191: INFO: PersistentVolumeClaim csi-hostpathj494n found and phase=Bound (4.429201924s)
STEP: Creating pod pod-subpath-test-dynamicpv-ghdp
STEP: Creating a pod to test subpath
Oct  7 16:37:19.628: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-ghdp" in namespace "provisioning-3382" to be "Succeeded or Failed"
Oct  7 16:37:19.774: INFO: Pod "pod-subpath-test-dynamicpv-ghdp": Phase="Pending", Reason="", readiness=false. Elapsed: 145.781179ms
Oct  7 16:37:21.917: INFO: Pod "pod-subpath-test-dynamicpv-ghdp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.288732387s
Oct  7 16:37:24.061: INFO: Pod "pod-subpath-test-dynamicpv-ghdp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.432994166s
Oct  7 16:37:26.208: INFO: Pod "pod-subpath-test-dynamicpv-ghdp": Phase="Pending", Reason="", readiness=false. Elapsed: 6.580293485s
Oct  7 16:37:28.353: INFO: Pod "pod-subpath-test-dynamicpv-ghdp": Phase="Pending", Reason="", readiness=false. Elapsed: 8.724699808s
Oct  7 16:37:30.497: INFO: Pod "pod-subpath-test-dynamicpv-ghdp": Phase="Pending", Reason="", readiness=false. Elapsed: 10.868739924s
Oct  7 16:37:32.642: INFO: Pod "pod-subpath-test-dynamicpv-ghdp": Phase="Pending", Reason="", readiness=false. Elapsed: 13.013643492s
Oct  7 16:37:34.789: INFO: Pod "pod-subpath-test-dynamicpv-ghdp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.160447001s
STEP: Saw pod success
Oct  7 16:37:34.789: INFO: Pod "pod-subpath-test-dynamicpv-ghdp" satisfied condition "Succeeded or Failed"
Oct  7 16:37:34.932: INFO: Trying to get logs from node ip-172-20-43-90.sa-east-1.compute.internal pod pod-subpath-test-dynamicpv-ghdp container test-container-subpath-dynamicpv-ghdp: <nil>
STEP: delete the pod
Oct  7 16:37:35.227: INFO: Waiting for pod pod-subpath-test-dynamicpv-ghdp to disappear
Oct  7 16:37:35.369: INFO: Pod pod-subpath-test-dynamicpv-ghdp no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-ghdp
Oct  7 16:37:35.369: INFO: Deleting pod "pod-subpath-test-dynamicpv-ghdp" in namespace "provisioning-3382"
STEP: Creating pod pod-subpath-test-dynamicpv-ghdp
STEP: Creating a pod to test subpath
Oct  7 16:37:35.657: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-ghdp" in namespace "provisioning-3382" to be "Succeeded or Failed"
Oct  7 16:37:35.800: INFO: Pod "pod-subpath-test-dynamicpv-ghdp": Phase="Pending", Reason="", readiness=false. Elapsed: 142.817438ms
Oct  7 16:37:37.944: INFO: Pod "pod-subpath-test-dynamicpv-ghdp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.286836535s
Oct  7 16:37:40.088: INFO: Pod "pod-subpath-test-dynamicpv-ghdp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.431215343s
Oct  7 16:37:42.232: INFO: Pod "pod-subpath-test-dynamicpv-ghdp": Phase="Pending", Reason="", readiness=false. Elapsed: 6.57484479s
Oct  7 16:37:44.377: INFO: Pod "pod-subpath-test-dynamicpv-ghdp": Phase="Pending", Reason="", readiness=false. Elapsed: 8.719765243s
Oct  7 16:37:46.521: INFO: Pod "pod-subpath-test-dynamicpv-ghdp": Phase="Pending", Reason="", readiness=false. Elapsed: 10.86421811s
Oct  7 16:37:48.673: INFO: Pod "pod-subpath-test-dynamicpv-ghdp": Phase="Pending", Reason="", readiness=false. Elapsed: 13.015894605s
Oct  7 16:37:50.816: INFO: Pod "pod-subpath-test-dynamicpv-ghdp": Phase="Pending", Reason="", readiness=false. Elapsed: 15.159400512s
Oct  7 16:37:52.959: INFO: Pod "pod-subpath-test-dynamicpv-ghdp": Phase="Pending", Reason="", readiness=false. Elapsed: 17.30211616s
Oct  7 16:37:55.103: INFO: Pod "pod-subpath-test-dynamicpv-ghdp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.445648886s
STEP: Saw pod success
Oct  7 16:37:55.103: INFO: Pod "pod-subpath-test-dynamicpv-ghdp" satisfied condition "Succeeded or Failed"
Oct  7 16:37:55.246: INFO: Trying to get logs from node ip-172-20-43-90.sa-east-1.compute.internal pod pod-subpath-test-dynamicpv-ghdp container test-container-subpath-dynamicpv-ghdp: <nil>
STEP: delete the pod
Oct  7 16:37:55.540: INFO: Waiting for pod pod-subpath-test-dynamicpv-ghdp to disappear
Oct  7 16:37:55.682: INFO: Pod pod-subpath-test-dynamicpv-ghdp no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-ghdp
Oct  7 16:37:55.683: INFO: Deleting pod "pod-subpath-test-dynamicpv-ghdp" in namespace "provisioning-3382"
... skipping 54 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:399
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":8,"skipped":45,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:38:20.130: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 203 lines ...
Oct  7 16:35:46.089: INFO: stderr: ""
Oct  7 16:35:46.089: INFO: stdout: "deployment.apps/agnhost-replica created\n"
STEP: validating guestbook app
Oct  7 16:35:46.089: INFO: Waiting for all frontend pods to be Running.
Oct  7 16:35:51.240: INFO: Waiting for frontend to serve content.
Oct  7 16:36:01.392: INFO: Trying to add a new entry to the guestbook.
Oct  7 16:36:06.543: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response: 
Oct  7 16:36:16.692: INFO: Verifying that added entry can be retrieved.
Oct  7 16:36:21.840: INFO: Failed to get response from guestbook. err: <nil>, response: {"data":""}
Oct  7 16:36:31.988: INFO: Failed to get response from guestbook. err: <nil>, response: {"data":""}
Oct  7 16:36:37.148: INFO: Failed to get response from guestbook. err: <nil>, response: {"data":""}
Oct  7 16:36:42.296: INFO: Failed to get response from guestbook. err: <nil>, response: {"data":""}
Oct  7 16:36:52.442: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response: 
Oct  7 16:37:02.592: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response: 
Oct  7 16:37:17.742: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response: 
Oct  7 16:37:22.891: INFO: Failed to get response from guestbook. err: <nil>, response: {"data":""}
Oct  7 16:37:38.047: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response: 
Oct  7 16:37:48.195: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response: 
Oct  7 16:37:53.345: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response: 
Oct  7 16:38:08.499: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response: 
STEP: using delete to clean up resources
Oct  7 16:38:18.648: INFO: Running '/tmp/kubectl62913309/kubectl --server=https://api.e2e-f7af145b3f-58f2d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-945 delete --grace-period=0 --force -f -'
Oct  7 16:38:19.340: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Oct  7 16:38:19.340: INFO: stdout: "service \"agnhost-replica\" force deleted\n"
STEP: using delete to clean up resources
Oct  7 16:38:19.340: INFO: Running '/tmp/kubectl62913309/kubectl --server=https://api.e2e-f7af145b3f-58f2d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-945 delete --grace-period=0 --force -f -'
... skipping 26 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Guestbook application
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:336
    should create and stop a working application  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":-1,"completed":2,"skipped":11,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  7 16:38:19.229: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test substitution in container's args
Oct  7 16:38:20.097: INFO: Waiting up to 5m0s for pod "var-expansion-affab83f-6ad3-47a7-90f3-4b464107c632" in namespace "var-expansion-9082" to be "Succeeded or Failed"
Oct  7 16:38:20.241: INFO: Pod "var-expansion-affab83f-6ad3-47a7-90f3-4b464107c632": Phase="Pending", Reason="", readiness=false. Elapsed: 144.398909ms
Oct  7 16:38:22.387: INFO: Pod "var-expansion-affab83f-6ad3-47a7-90f3-4b464107c632": Phase="Pending", Reason="", readiness=false. Elapsed: 2.290606765s
Oct  7 16:38:24.532: INFO: Pod "var-expansion-affab83f-6ad3-47a7-90f3-4b464107c632": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.435183742s
STEP: Saw pod success
Oct  7 16:38:24.532: INFO: Pod "var-expansion-affab83f-6ad3-47a7-90f3-4b464107c632" satisfied condition "Succeeded or Failed"
Oct  7 16:38:24.676: INFO: Trying to get logs from node ip-172-20-43-90.sa-east-1.compute.internal pod var-expansion-affab83f-6ad3-47a7-90f3-4b464107c632 container dapi-container: <nil>
STEP: delete the pod
Oct  7 16:38:24.977: INFO: Waiting for pod var-expansion-affab83f-6ad3-47a7-90f3-4b464107c632 to disappear
Oct  7 16:38:25.121: INFO: Pod var-expansion-affab83f-6ad3-47a7-90f3-4b464107c632 no longer exists
[AfterEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.182 seconds]
[sig-node] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":33,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
... skipping 32 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name projected-secret-test-0e229911-30e7-4c2c-8cc5-8dd3692b397d
STEP: Creating a pod to test consume secrets
Oct  7 16:38:21.187: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6e0b5b7b-7e16-44be-9863-c17afe4722d6" in namespace "projected-4313" to be "Succeeded or Failed"
Oct  7 16:38:21.330: INFO: Pod "pod-projected-secrets-6e0b5b7b-7e16-44be-9863-c17afe4722d6": Phase="Pending", Reason="", readiness=false. Elapsed: 142.996239ms
Oct  7 16:38:23.473: INFO: Pod "pod-projected-secrets-6e0b5b7b-7e16-44be-9863-c17afe4722d6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.286752854s
Oct  7 16:38:25.617: INFO: Pod "pod-projected-secrets-6e0b5b7b-7e16-44be-9863-c17afe4722d6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.430855218s
STEP: Saw pod success
Oct  7 16:38:25.618: INFO: Pod "pod-projected-secrets-6e0b5b7b-7e16-44be-9863-c17afe4722d6" satisfied condition "Succeeded or Failed"
Oct  7 16:38:25.760: INFO: Trying to get logs from node ip-172-20-43-90.sa-east-1.compute.internal pod pod-projected-secrets-6e0b5b7b-7e16-44be-9863-c17afe4722d6 container projected-secret-volume-test: <nil>
STEP: delete the pod
Oct  7 16:38:26.052: INFO: Waiting for pod pod-projected-secrets-6e0b5b7b-7e16-44be-9863-c17afe4722d6 to disappear
Oct  7 16:38:26.194: INFO: Pod pod-projected-secrets-6e0b5b7b-7e16-44be-9863-c17afe4722d6 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.303 seconds]
[sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":56,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:38:26.507: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 32 lines ...
Oct  7 16:37:54.913: INFO: Using claimSize:1Gi, test suite supported size:{ 1Gi}, driver(aws) supported size:{ 1Gi} 
STEP: creating a StorageClass volume-expand-6079w595v
STEP: creating a claim
Oct  7 16:37:55.059: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Expanding non-expandable pvc
Oct  7 16:37:55.348: INFO: currentPvcSize {{1073741824 0} {<nil>} 1Gi BinarySI}, newSize {{2147483648 0} {<nil>}  BinarySI}
Oct  7 16:37:55.642: INFO: Error updating pvc awsksv72: PersistentVolumeClaim "awsksv72" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-6079w595v",
  	... // 2 identical fields
  }

Oct  7 16:37:57.932: INFO: Error updating pvc awsksv72: PersistentVolumeClaim "awsksv72" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-6079w595v",
  	... // 2 identical fields
  }

Oct  7 16:37:59.934: INFO: Error updating pvc awsksv72: PersistentVolumeClaim "awsksv72" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-6079w595v",
  	... // 2 identical fields
  }

Oct  7 16:38:01.930: INFO: Error updating pvc awsksv72: PersistentVolumeClaim "awsksv72" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-6079w595v",
  	... // 2 identical fields
  }

Oct  7 16:38:03.932: INFO: Error updating pvc awsksv72: PersistentVolumeClaim "awsksv72" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-6079w595v",
  	... // 2 identical fields
  }

Oct  7 16:38:05.931: INFO: Error updating pvc awsksv72: PersistentVolumeClaim "awsksv72" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-6079w595v",
  	... // 2 identical fields
  }

Oct  7 16:38:07.931: INFO: Error updating pvc awsksv72: PersistentVolumeClaim "awsksv72" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-6079w595v",
  	... // 2 identical fields
  }

Oct  7 16:38:09.931: INFO: Error updating pvc awsksv72: PersistentVolumeClaim "awsksv72" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-6079w595v",
  	... // 2 identical fields
  }

Oct  7 16:38:11.931: INFO: Error updating pvc awsksv72: PersistentVolumeClaim "awsksv72" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-6079w595v",
  	... // 2 identical fields
  }

Oct  7 16:38:13.936: INFO: Error updating pvc awsksv72: PersistentVolumeClaim "awsksv72" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-6079w595v",
  	... // 2 identical fields
  }

Oct  7 16:38:15.931: INFO: Error updating pvc awsksv72: PersistentVolumeClaim "awsksv72" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-6079w595v",
  	... // 2 identical fields
  }

Oct  7 16:38:17.931: INFO: Error updating pvc awsksv72: PersistentVolumeClaim "awsksv72" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-6079w595v",
  	... // 2 identical fields
  }

Oct  7 16:38:19.932: INFO: Error updating pvc awsksv72: PersistentVolumeClaim "awsksv72" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-6079w595v",
  	... // 2 identical fields
  }

Oct  7 16:38:21.931: INFO: Error updating pvc awsksv72: PersistentVolumeClaim "awsksv72" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-6079w595v",
  	... // 2 identical fields
  }

Oct  7 16:38:23.930: INFO: Error updating pvc awsksv72: PersistentVolumeClaim "awsksv72" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-6079w595v",
  	... // 2 identical fields
  }

Oct  7 16:38:25.936: INFO: Error updating pvc awsksv72: PersistentVolumeClaim "awsksv72" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-6079w595v",
  	... // 2 identical fields
  }

Oct  7 16:38:26.237: INFO: Error updating pvc awsksv72: PersistentVolumeClaim "awsksv72" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not allow expansion of pvcs without AllowVolumeExpansion property
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:157
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property","total":-1,"completed":7,"skipped":50,"failed":0}
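Editor's note: the long run of "Error updating pvc awsksv72 ... Forbidden: spec is immutable after creation" messages above is expected output, not a failure — this test verifies that the API server rejects PVC expansion when the StorageClass does not set `allowVolumeExpansion`, and the test retries the update until its timeout to confirm the rejection is consistent. For comparison, a StorageClass that would permit such an expansion looks roughly like the sketch below (the name and parameters are illustrative, not taken from this run):

```yaml
# Illustrative StorageClass permitting online PVC expansion.
# The test above used a class *without* allowVolumeExpansion,
# so resize requests were correctly refused.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp2-expandable   # hypothetical name
provisioner: kubernetes.io/aws-ebs
allowVolumeExpansion: true
parameters:
  type: gp2
```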
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:38:26.977: INFO: Driver local doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 103 lines ...
Oct  7 16:38:13.468: INFO: PersistentVolumeClaim pvc-smmhd found but phase is Pending instead of Bound.
Oct  7 16:38:15.612: INFO: PersistentVolumeClaim pvc-smmhd found and phase=Bound (4.429190677s)
Oct  7 16:38:15.612: INFO: Waiting up to 3m0s for PersistentVolume local-77lxz to have phase Bound
Oct  7 16:38:15.757: INFO: PersistentVolume local-77lxz found and phase=Bound (144.821459ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-tgvh
STEP: Creating a pod to test subpath
Oct  7 16:38:16.190: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-tgvh" in namespace "provisioning-4954" to be "Succeeded or Failed"
Oct  7 16:38:16.334: INFO: Pod "pod-subpath-test-preprovisionedpv-tgvh": Phase="Pending", Reason="", readiness=false. Elapsed: 143.827798ms
Oct  7 16:38:18.478: INFO: Pod "pod-subpath-test-preprovisionedpv-tgvh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.28821354s
Oct  7 16:38:20.622: INFO: Pod "pod-subpath-test-preprovisionedpv-tgvh": Phase="Pending", Reason="", readiness=false. Elapsed: 4.432361668s
Oct  7 16:38:22.766: INFO: Pod "pod-subpath-test-preprovisionedpv-tgvh": Phase="Pending", Reason="", readiness=false. Elapsed: 6.576167805s
Oct  7 16:38:24.910: INFO: Pod "pod-subpath-test-preprovisionedpv-tgvh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.71952312s
STEP: Saw pod success
Oct  7 16:38:24.910: INFO: Pod "pod-subpath-test-preprovisionedpv-tgvh" satisfied condition "Succeeded or Failed"
Oct  7 16:38:25.052: INFO: Trying to get logs from node ip-172-20-56-61.sa-east-1.compute.internal pod pod-subpath-test-preprovisionedpv-tgvh container test-container-subpath-preprovisionedpv-tgvh: <nil>
STEP: delete the pod
Oct  7 16:38:25.356: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-tgvh to disappear
Oct  7 16:38:25.499: INFO: Pod pod-subpath-test-preprovisionedpv-tgvh no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-tgvh
Oct  7 16:38:25.499: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-tgvh" in namespace "provisioning-4954"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:369
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":10,"skipped":61,"failed":0}

SSSSSS
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","total":-1,"completed":5,"skipped":49,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  7 16:38:01.549: INFO: >>> kubeConfig: /root/.kube/config
... skipping 15 lines ...
Oct  7 16:38:13.230: INFO: PersistentVolumeClaim pvc-tglqf found but phase is Pending instead of Bound.
Oct  7 16:38:15.378: INFO: PersistentVolumeClaim pvc-tglqf found and phase=Bound (2.293777993s)
Oct  7 16:38:15.379: INFO: Waiting up to 3m0s for PersistentVolume local-k66d5 to have phase Bound
Oct  7 16:38:15.522: INFO: PersistentVolume local-k66d5 found and phase=Bound (143.426188ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-647h
STEP: Creating a pod to test subpath
Oct  7 16:38:15.962: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-647h" in namespace "provisioning-8128" to be "Succeeded or Failed"
Oct  7 16:38:16.105: INFO: Pod "pod-subpath-test-preprovisionedpv-647h": Phase="Pending", Reason="", readiness=false. Elapsed: 143.321899ms
Oct  7 16:38:18.251: INFO: Pod "pod-subpath-test-preprovisionedpv-647h": Phase="Pending", Reason="", readiness=false. Elapsed: 2.289254312s
Oct  7 16:38:20.396: INFO: Pod "pod-subpath-test-preprovisionedpv-647h": Phase="Pending", Reason="", readiness=false. Elapsed: 4.434438669s
Oct  7 16:38:22.540: INFO: Pod "pod-subpath-test-preprovisionedpv-647h": Phase="Pending", Reason="", readiness=false. Elapsed: 6.578475865s
Oct  7 16:38:24.685: INFO: Pod "pod-subpath-test-preprovisionedpv-647h": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.722692012s
STEP: Saw pod success
Oct  7 16:38:24.685: INFO: Pod "pod-subpath-test-preprovisionedpv-647h" satisfied condition "Succeeded or Failed"
Oct  7 16:38:24.828: INFO: Trying to get logs from node ip-172-20-56-61.sa-east-1.compute.internal pod pod-subpath-test-preprovisionedpv-647h container test-container-subpath-preprovisionedpv-647h: <nil>
STEP: delete the pod
Oct  7 16:38:25.128: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-647h to disappear
Oct  7 16:38:25.272: INFO: Pod pod-subpath-test-preprovisionedpv-647h no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-647h
Oct  7 16:38:25.272: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-647h" in namespace "provisioning-8128"
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:369
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":6,"skipped":49,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 29 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  7 16:38:30.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3989" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]","total":-1,"completed":8,"skipped":63,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:38:30.678: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 9 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:174

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-apps] Job should fail to exceed backoffLimit","total":-1,"completed":8,"skipped":43,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  7 16:38:00.455: INFO: >>> kubeConfig: /root/.kube/config
... skipping 4 lines ...
Oct  7 16:38:01.171: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
STEP: creating a test aws volume
Oct  7 16:38:02.148: INFO: Successfully created a new PD: "aws://sa-east-1a/vol-0c367755a5cbb39a0".
Oct  7 16:38:02.148: INFO: Creating resource for inline volume
STEP: Creating pod exec-volume-test-inlinevolume-c9nq
STEP: Creating a pod to test exec-volume-test
Oct  7 16:38:02.296: INFO: Waiting up to 5m0s for pod "exec-volume-test-inlinevolume-c9nq" in namespace "volume-8185" to be "Succeeded or Failed"
Oct  7 16:38:02.439: INFO: Pod "exec-volume-test-inlinevolume-c9nq": Phase="Pending", Reason="", readiness=false. Elapsed: 142.983838ms
Oct  7 16:38:04.583: INFO: Pod "exec-volume-test-inlinevolume-c9nq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.286727775s
Oct  7 16:38:06.727: INFO: Pod "exec-volume-test-inlinevolume-c9nq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.43125933s
Oct  7 16:38:08.871: INFO: Pod "exec-volume-test-inlinevolume-c9nq": Phase="Pending", Reason="", readiness=false. Elapsed: 6.575394803s
Oct  7 16:38:11.016: INFO: Pod "exec-volume-test-inlinevolume-c9nq": Phase="Pending", Reason="", readiness=false. Elapsed: 8.720171539s
Oct  7 16:38:13.160: INFO: Pod "exec-volume-test-inlinevolume-c9nq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.864669685s
STEP: Saw pod success
Oct  7 16:38:13.161: INFO: Pod "exec-volume-test-inlinevolume-c9nq" satisfied condition "Succeeded or Failed"
Oct  7 16:38:13.304: INFO: Trying to get logs from node ip-172-20-47-191.sa-east-1.compute.internal pod exec-volume-test-inlinevolume-c9nq container exec-container-inlinevolume-c9nq: <nil>
STEP: delete the pod
Oct  7 16:38:13.598: INFO: Waiting for pod exec-volume-test-inlinevolume-c9nq to disappear
Oct  7 16:38:13.743: INFO: Pod exec-volume-test-inlinevolume-c9nq no longer exists
STEP: Deleting pod exec-volume-test-inlinevolume-c9nq
Oct  7 16:38:13.743: INFO: Deleting pod "exec-volume-test-inlinevolume-c9nq" in namespace "volume-8185"
Oct  7 16:38:14.165: INFO: Couldn't delete PD "aws://sa-east-1a/vol-0c367755a5cbb39a0", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0c367755a5cbb39a0 is currently attached to i-012d8f42ddb99cc8a
	status code: 400, request id: 869210d3-bd90-4280-a575-f693e5c91666
Oct  7 16:38:19.895: INFO: Couldn't delete PD "aws://sa-east-1a/vol-0c367755a5cbb39a0", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0c367755a5cbb39a0 is currently attached to i-012d8f42ddb99cc8a
	status code: 400, request id: beaff149-11d2-4b41-9f71-674959ef3793
Oct  7 16:38:25.630: INFO: Couldn't delete PD "aws://sa-east-1a/vol-0c367755a5cbb39a0", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0c367755a5cbb39a0 is currently attached to i-012d8f42ddb99cc8a
	status code: 400, request id: de5d699b-877a-46d2-924f-19b9f17e65d7
Oct  7 16:38:31.396: INFO: Successfully deleted PD "aws://sa-east-1a/vol-0c367755a5cbb39a0".
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  7 16:38:31.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-8185" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":9,"skipped":43,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:38:31.716: INFO: Only supported for providers [azure] (not aws)
... skipping 87 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  7 16:38:36.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2030" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never  [Conformance]","total":-1,"completed":10,"skipped":56,"failed":0}

SS
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 20 lines ...
• [SLOW TEST:13.512 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":-1,"completed":3,"skipped":15,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
... skipping 44 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":7,"skipped":50,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:38:36.811: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 94 lines ...
Oct  7 16:37:55.205: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
STEP: creating a test aws volume
Oct  7 16:37:56.182: INFO: Successfully created a new PD: "aws://sa-east-1a/vol-0f3218c6173aedb35".
Oct  7 16:37:56.182: INFO: Creating resource for inline volume
STEP: Creating pod exec-volume-test-inlinevolume-6p24
STEP: Creating a pod to test exec-volume-test
Oct  7 16:37:56.328: INFO: Waiting up to 5m0s for pod "exec-volume-test-inlinevolume-6p24" in namespace "volume-1867" to be "Succeeded or Failed"
Oct  7 16:37:56.471: INFO: Pod "exec-volume-test-inlinevolume-6p24": Phase="Pending", Reason="", readiness=false. Elapsed: 143.449649ms
Oct  7 16:37:58.616: INFO: Pod "exec-volume-test-inlinevolume-6p24": Phase="Pending", Reason="", readiness=false. Elapsed: 2.288000918s
Oct  7 16:38:00.760: INFO: Pod "exec-volume-test-inlinevolume-6p24": Phase="Pending", Reason="", readiness=false. Elapsed: 4.432491871s
Oct  7 16:38:02.904: INFO: Pod "exec-volume-test-inlinevolume-6p24": Phase="Pending", Reason="", readiness=false. Elapsed: 6.576404427s
Oct  7 16:38:05.049: INFO: Pod "exec-volume-test-inlinevolume-6p24": Phase="Pending", Reason="", readiness=false. Elapsed: 8.721004965s
Oct  7 16:38:07.192: INFO: Pod "exec-volume-test-inlinevolume-6p24": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.864419178s
STEP: Saw pod success
Oct  7 16:38:07.192: INFO: Pod "exec-volume-test-inlinevolume-6p24" satisfied condition "Succeeded or Failed"
Oct  7 16:38:07.335: INFO: Trying to get logs from node ip-172-20-47-191.sa-east-1.compute.internal pod exec-volume-test-inlinevolume-6p24 container exec-container-inlinevolume-6p24: <nil>
STEP: delete the pod
Oct  7 16:38:07.631: INFO: Waiting for pod exec-volume-test-inlinevolume-6p24 to disappear
Oct  7 16:38:07.774: INFO: Pod exec-volume-test-inlinevolume-6p24 no longer exists
STEP: Deleting pod exec-volume-test-inlinevolume-6p24
Oct  7 16:38:07.774: INFO: Deleting pod "exec-volume-test-inlinevolume-6p24" in namespace "volume-1867"
Oct  7 16:38:08.171: INFO: Couldn't delete PD "aws://sa-east-1a/vol-0f3218c6173aedb35", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0f3218c6173aedb35 is currently attached to i-012d8f42ddb99cc8a
	status code: 400, request id: c32f65ac-50b1-47ff-b328-722979cde817
Oct  7 16:38:13.902: INFO: Couldn't delete PD "aws://sa-east-1a/vol-0f3218c6173aedb35", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0f3218c6173aedb35 is currently attached to i-012d8f42ddb99cc8a
	status code: 400, request id: 2d4e8cab-25a7-4d4a-bc2a-e5e85f5069f2
Oct  7 16:38:19.736: INFO: Couldn't delete PD "aws://sa-east-1a/vol-0f3218c6173aedb35", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0f3218c6173aedb35 is currently attached to i-012d8f42ddb99cc8a
	status code: 400, request id: 6a20d691-f5cf-466d-8292-6147102bcbbf
Oct  7 16:38:25.456: INFO: Couldn't delete PD "aws://sa-east-1a/vol-0f3218c6173aedb35", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0f3218c6173aedb35 is currently attached to i-012d8f42ddb99cc8a
	status code: 400, request id: a2a9a036-c736-454d-b142-d3f9ac68bb3f
Oct  7 16:38:31.209: INFO: Couldn't delete PD "aws://sa-east-1a/vol-0f3218c6173aedb35", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0f3218c6173aedb35 is currently attached to i-012d8f42ddb99cc8a
	status code: 400, request id: 6ae0b565-4ddf-4514-90e2-c4c723198aee
Oct  7 16:38:37.014: INFO: Successfully deleted PD "aws://sa-east-1a/vol-0f3218c6173aedb35".
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  7 16:38:37.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-1867" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (ext4)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume","total":-1,"completed":5,"skipped":28,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:38:37.315: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 42 lines ...
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Oct  7 16:38:28.500: INFO: File wheezy_udp@dns-test-service-3.dns-6187.svc.cluster.local from pod  dns-6187/dns-test-2db71da5-ae6c-4e81-a2bc-08faf4cf20f0 contains 'foo.example.com.
' instead of 'bar.example.com.'
Oct  7 16:38:28.644: INFO: File jessie_udp@dns-test-service-3.dns-6187.svc.cluster.local from pod  dns-6187/dns-test-2db71da5-ae6c-4e81-a2bc-08faf4cf20f0 contains 'foo.example.com.
' instead of 'bar.example.com.'
Oct  7 16:38:28.644: INFO: Lookups using dns-6187/dns-test-2db71da5-ae6c-4e81-a2bc-08faf4cf20f0 failed for: [wheezy_udp@dns-test-service-3.dns-6187.svc.cluster.local jessie_udp@dns-test-service-3.dns-6187.svc.cluster.local]

Oct  7 16:38:33.788: INFO: File wheezy_udp@dns-test-service-3.dns-6187.svc.cluster.local from pod  dns-6187/dns-test-2db71da5-ae6c-4e81-a2bc-08faf4cf20f0 contains 'foo.example.com.
' instead of 'bar.example.com.'
Oct  7 16:38:33.933: INFO: File jessie_udp@dns-test-service-3.dns-6187.svc.cluster.local from pod  dns-6187/dns-test-2db71da5-ae6c-4e81-a2bc-08faf4cf20f0 contains 'foo.example.com.
' instead of 'bar.example.com.'
Oct  7 16:38:33.933: INFO: Lookups using dns-6187/dns-test-2db71da5-ae6c-4e81-a2bc-08faf4cf20f0 failed for: [wheezy_udp@dns-test-service-3.dns-6187.svc.cluster.local jessie_udp@dns-test-service-3.dns-6187.svc.cluster.local]

Oct  7 16:38:38.788: INFO: File wheezy_udp@dns-test-service-3.dns-6187.svc.cluster.local from pod  dns-6187/dns-test-2db71da5-ae6c-4e81-a2bc-08faf4cf20f0 contains 'foo.example.com.
' instead of 'bar.example.com.'
Oct  7 16:38:38.933: INFO: File jessie_udp@dns-test-service-3.dns-6187.svc.cluster.local from pod  dns-6187/dns-test-2db71da5-ae6c-4e81-a2bc-08faf4cf20f0 contains 'foo.example.com.
' instead of 'bar.example.com.'
Oct  7 16:38:38.933: INFO: Lookups using dns-6187/dns-test-2db71da5-ae6c-4e81-a2bc-08faf4cf20f0 failed for: [wheezy_udp@dns-test-service-3.dns-6187.svc.cluster.local jessie_udp@dns-test-service-3.dns-6187.svc.cluster.local]

Oct  7 16:38:43.933: INFO: DNS probes using dns-test-2db71da5-ae6c-4e81-a2bc-08faf4cf20f0 succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6187.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-6187.svc.cluster.local; sleep 1; done
... skipping 17 lines ...
• [SLOW TEST:40.395 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":-1,"completed":7,"skipped":48,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:38:48.003: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 66 lines ...
• [SLOW TEST:63.610 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:237
------------------------------
{"msg":"PASSED [sig-node] Probing container should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance]","total":-1,"completed":7,"skipped":56,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 20 lines ...
Oct  7 16:38:41.951: INFO: PersistentVolumeClaim pvc-8l25m found but phase is Pending instead of Bound.
Oct  7 16:38:44.094: INFO: PersistentVolumeClaim pvc-8l25m found and phase=Bound (13.006470493s)
Oct  7 16:38:44.094: INFO: Waiting up to 3m0s for PersistentVolume local-spznt to have phase Bound
Oct  7 16:38:44.236: INFO: PersistentVolume local-spznt found and phase=Bound (142.377238ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-gt2t
STEP: Creating a pod to test subpath
Oct  7 16:38:44.666: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-gt2t" in namespace "provisioning-7569" to be "Succeeded or Failed"
Oct  7 16:38:44.808: INFO: Pod "pod-subpath-test-preprovisionedpv-gt2t": Phase="Pending", Reason="", readiness=false. Elapsed: 142.63699ms
Oct  7 16:38:46.952: INFO: Pod "pod-subpath-test-preprovisionedpv-gt2t": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.286309795s
STEP: Saw pod success
Oct  7 16:38:46.952: INFO: Pod "pod-subpath-test-preprovisionedpv-gt2t" satisfied condition "Succeeded or Failed"
Oct  7 16:38:47.095: INFO: Trying to get logs from node ip-172-20-43-90.sa-east-1.compute.internal pod pod-subpath-test-preprovisionedpv-gt2t container test-container-subpath-preprovisionedpv-gt2t: <nil>
STEP: delete the pod
Oct  7 16:38:47.393: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-gt2t to disappear
Oct  7 16:38:47.538: INFO: Pod pod-subpath-test-preprovisionedpv-gt2t no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-gt2t
Oct  7 16:38:47.538: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-gt2t" in namespace "provisioning-7569"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":10,"skipped":67,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 101 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI attach test using mock driver
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:316
    should require VolumeAttach for drivers with attachment
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:338
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI attach test using mock driver should require VolumeAttach for drivers with attachment","total":-1,"completed":7,"skipped":72,"failed":0}

S
------------------------------
[BeforeEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 25 lines ...
• [SLOW TEST:80.068 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should remove from active list jobs that have been deleted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:244
------------------------------
{"msg":"PASSED [sig-apps] CronJob should remove from active list jobs that have been deleted","total":-1,"completed":4,"skipped":33,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:38:50.132: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 97 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl client-side validation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:982
    should create/apply a CR with unknown fields for CRD with no validation schema
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:983
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl client-side validation should create/apply a CR with unknown fields for CRD with no validation schema","total":-1,"completed":11,"skipped":58,"failed":0}
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  7 16:38:52.476: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0644 on node default medium
Oct  7 16:38:53.346: INFO: Waiting up to 5m0s for pod "pod-1873dfba-f816-49ba-9818-3be30053cd55" in namespace "emptydir-1350" to be "Succeeded or Failed"
Oct  7 16:38:53.489: INFO: Pod "pod-1873dfba-f816-49ba-9818-3be30053cd55": Phase="Pending", Reason="", readiness=false. Elapsed: 143.241558ms
Oct  7 16:38:55.633: INFO: Pod "pod-1873dfba-f816-49ba-9818-3be30053cd55": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.286891588s
STEP: Saw pod success
Oct  7 16:38:55.633: INFO: Pod "pod-1873dfba-f816-49ba-9818-3be30053cd55" satisfied condition "Succeeded or Failed"
Oct  7 16:38:55.776: INFO: Trying to get logs from node ip-172-20-47-191.sa-east-1.compute.internal pod pod-1873dfba-f816-49ba-9818-3be30053cd55 container test-container: <nil>
STEP: delete the pod
Oct  7 16:38:56.068: INFO: Waiting for pod pod-1873dfba-f816-49ba-9818-3be30053cd55 to disappear
Oct  7 16:38:56.211: INFO: Pod pod-1873dfba-f816-49ba-9818-3be30053cd55 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  7 16:38:56.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1350" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":58,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:38:56.569: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-link-bindmounted]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SSSSSS
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":42,"failed":1,"failures":["[sig-network] Services should be rejected when no endpoints exist"]}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  7 16:38:10.660: INFO: >>> kubeConfig: /root/.kube/config
... skipping 20 lines ...
Oct  7 16:38:27.089: INFO: PersistentVolumeClaim pvc-g8rnh found but phase is Pending instead of Bound.
Oct  7 16:38:29.235: INFO: PersistentVolumeClaim pvc-g8rnh found and phase=Bound (13.025006373s)
Oct  7 16:38:29.235: INFO: Waiting up to 3m0s for PersistentVolume local-hq8r8 to have phase Bound
Oct  7 16:38:29.382: INFO: PersistentVolume local-hq8r8 found and phase=Bound (146.718759ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-46pd
STEP: Creating a pod to test atomic-volume-subpath
Oct  7 16:38:29.817: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-46pd" in namespace "provisioning-1785" to be "Succeeded or Failed"
Oct  7 16:38:29.963: INFO: Pod "pod-subpath-test-preprovisionedpv-46pd": Phase="Pending", Reason="", readiness=false. Elapsed: 145.951999ms
Oct  7 16:38:32.107: INFO: Pod "pod-subpath-test-preprovisionedpv-46pd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.290290453s
Oct  7 16:38:34.251: INFO: Pod "pod-subpath-test-preprovisionedpv-46pd": Phase="Running", Reason="", readiness=true. Elapsed: 4.434469364s
Oct  7 16:38:36.399: INFO: Pod "pod-subpath-test-preprovisionedpv-46pd": Phase="Running", Reason="", readiness=true. Elapsed: 6.582160166s
Oct  7 16:38:38.544: INFO: Pod "pod-subpath-test-preprovisionedpv-46pd": Phase="Running", Reason="", readiness=true. Elapsed: 8.727113845s
Oct  7 16:38:40.688: INFO: Pod "pod-subpath-test-preprovisionedpv-46pd": Phase="Running", Reason="", readiness=true. Elapsed: 10.871363767s
Oct  7 16:38:42.832: INFO: Pod "pod-subpath-test-preprovisionedpv-46pd": Phase="Running", Reason="", readiness=true. Elapsed: 13.015628772s
Oct  7 16:38:44.977: INFO: Pod "pod-subpath-test-preprovisionedpv-46pd": Phase="Running", Reason="", readiness=true. Elapsed: 15.160095261s
Oct  7 16:38:47.121: INFO: Pod "pod-subpath-test-preprovisionedpv-46pd": Phase="Running", Reason="", readiness=true. Elapsed: 17.303950117s
Oct  7 16:38:49.264: INFO: Pod "pod-subpath-test-preprovisionedpv-46pd": Phase="Running", Reason="", readiness=true. Elapsed: 19.447727873s
Oct  7 16:38:51.409: INFO: Pod "pod-subpath-test-preprovisionedpv-46pd": Phase="Running", Reason="", readiness=true. Elapsed: 21.592178987s
Oct  7 16:38:53.554: INFO: Pod "pod-subpath-test-preprovisionedpv-46pd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 23.737060078s
STEP: Saw pod success
Oct  7 16:38:53.554: INFO: Pod "pod-subpath-test-preprovisionedpv-46pd" satisfied condition "Succeeded or Failed"
Oct  7 16:38:53.697: INFO: Trying to get logs from node ip-172-20-42-249.sa-east-1.compute.internal pod pod-subpath-test-preprovisionedpv-46pd container test-container-subpath-preprovisionedpv-46pd: <nil>
STEP: delete the pod
Oct  7 16:38:53.992: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-46pd to disappear
Oct  7 16:38:54.135: INFO: Pod pod-subpath-test-preprovisionedpv-46pd no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-46pd
Oct  7 16:38:54.135: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-46pd" in namespace "provisioning-1785"
... skipping 53 lines ...
Oct  7 16:38:42.503: INFO: PersistentVolumeClaim pvc-nrhxj found but phase is Pending instead of Bound.
Oct  7 16:38:44.648: INFO: PersistentVolumeClaim pvc-nrhxj found and phase=Bound (10.870328974s)
Oct  7 16:38:44.648: INFO: Waiting up to 3m0s for PersistentVolume local-mk9bn to have phase Bound
Oct  7 16:38:44.791: INFO: PersistentVolume local-mk9bn found and phase=Bound (143.32668ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-9zdc
STEP: Creating a pod to test subpath
Oct  7 16:38:45.224: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-9zdc" in namespace "provisioning-1801" to be "Succeeded or Failed"
Oct  7 16:38:45.368: INFO: Pod "pod-subpath-test-preprovisionedpv-9zdc": Phase="Pending", Reason="", readiness=false. Elapsed: 144.228328ms
Oct  7 16:38:47.513: INFO: Pod "pod-subpath-test-preprovisionedpv-9zdc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.288698296s
Oct  7 16:38:49.659: INFO: Pod "pod-subpath-test-preprovisionedpv-9zdc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.434942432s
Oct  7 16:38:51.804: INFO: Pod "pod-subpath-test-preprovisionedpv-9zdc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.580263505s
STEP: Saw pod success
Oct  7 16:38:51.804: INFO: Pod "pod-subpath-test-preprovisionedpv-9zdc" satisfied condition "Succeeded or Failed"
Oct  7 16:38:51.948: INFO: Trying to get logs from node ip-172-20-56-61.sa-east-1.compute.internal pod pod-subpath-test-preprovisionedpv-9zdc container test-container-subpath-preprovisionedpv-9zdc: <nil>
STEP: delete the pod
Oct  7 16:38:52.245: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-9zdc to disappear
Oct  7 16:38:52.392: INFO: Pod pod-subpath-test-preprovisionedpv-9zdc no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-9zdc
Oct  7 16:38:52.392: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-9zdc" in namespace "provisioning-1801"
STEP: Creating pod pod-subpath-test-preprovisionedpv-9zdc
STEP: Creating a pod to test subpath
Oct  7 16:38:52.702: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-9zdc" in namespace "provisioning-1801" to be "Succeeded or Failed"
Oct  7 16:38:52.846: INFO: Pod "pod-subpath-test-preprovisionedpv-9zdc": Phase="Pending", Reason="", readiness=false. Elapsed: 143.68982ms
Oct  7 16:38:54.991: INFO: Pod "pod-subpath-test-preprovisionedpv-9zdc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.288236459s
STEP: Saw pod success
Oct  7 16:38:54.991: INFO: Pod "pod-subpath-test-preprovisionedpv-9zdc" satisfied condition "Succeeded or Failed"
Oct  7 16:38:55.142: INFO: Trying to get logs from node ip-172-20-56-61.sa-east-1.compute.internal pod pod-subpath-test-preprovisionedpv-9zdc container test-container-subpath-preprovisionedpv-9zdc: <nil>
STEP: delete the pod
Oct  7 16:38:55.440: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-9zdc to disappear
Oct  7 16:38:55.584: INFO: Pod pod-subpath-test-preprovisionedpv-9zdc no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-9zdc
Oct  7 16:38:55.584: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-9zdc" in namespace "provisioning-1801"
... skipping 58 lines ...
• [SLOW TEST:9.694 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":-1,"completed":11,"skipped":72,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 5 lines ...
[It] should support file as subpath [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
Oct  7 16:38:37.620: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Oct  7 16:38:37.775: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-z6st
STEP: Creating a pod to test atomic-volume-subpath
Oct  7 16:38:37.925: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-z6st" in namespace "provisioning-2329" to be "Succeeded or Failed"
Oct  7 16:38:38.070: INFO: Pod "pod-subpath-test-inlinevolume-z6st": Phase="Pending", Reason="", readiness=false. Elapsed: 145.096099ms
Oct  7 16:38:40.215: INFO: Pod "pod-subpath-test-inlinevolume-z6st": Phase="Pending", Reason="", readiness=false. Elapsed: 2.290139891s
Oct  7 16:38:42.362: INFO: Pod "pod-subpath-test-inlinevolume-z6st": Phase="Running", Reason="", readiness=true. Elapsed: 4.436748618s
Oct  7 16:38:44.507: INFO: Pod "pod-subpath-test-inlinevolume-z6st": Phase="Running", Reason="", readiness=true. Elapsed: 6.581630803s
Oct  7 16:38:46.653: INFO: Pod "pod-subpath-test-inlinevolume-z6st": Phase="Running", Reason="", readiness=true. Elapsed: 8.727766927s
Oct  7 16:38:48.797: INFO: Pod "pod-subpath-test-inlinevolume-z6st": Phase="Running", Reason="", readiness=true. Elapsed: 10.872138007s
Oct  7 16:38:50.942: INFO: Pod "pod-subpath-test-inlinevolume-z6st": Phase="Running", Reason="", readiness=true. Elapsed: 13.016900681s
Oct  7 16:38:53.091: INFO: Pod "pod-subpath-test-inlinevolume-z6st": Phase="Running", Reason="", readiness=true. Elapsed: 15.165380164s
Oct  7 16:38:55.237: INFO: Pod "pod-subpath-test-inlinevolume-z6st": Phase="Running", Reason="", readiness=true. Elapsed: 17.311493225s
Oct  7 16:38:57.382: INFO: Pod "pod-subpath-test-inlinevolume-z6st": Phase="Running", Reason="", readiness=true. Elapsed: 19.456603729s
Oct  7 16:38:59.527: INFO: Pod "pod-subpath-test-inlinevolume-z6st": Phase="Running", Reason="", readiness=true. Elapsed: 21.601868931s
Oct  7 16:39:01.672: INFO: Pod "pod-subpath-test-inlinevolume-z6st": Phase="Succeeded", Reason="", readiness=false. Elapsed: 23.746638985s
STEP: Saw pod success
Oct  7 16:39:01.672: INFO: Pod "pod-subpath-test-inlinevolume-z6st" satisfied condition "Succeeded or Failed"
Oct  7 16:39:01.816: INFO: Trying to get logs from node ip-172-20-42-249.sa-east-1.compute.internal pod pod-subpath-test-inlinevolume-z6st container test-container-subpath-inlinevolume-z6st: <nil>
STEP: delete the pod
Oct  7 16:39:02.117: INFO: Waiting for pod pod-subpath-test-inlinevolume-z6st to disappear
Oct  7 16:39:02.261: INFO: Pod pod-subpath-test-inlinevolume-z6st no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-z6st
Oct  7 16:39:02.261: INFO: Deleting pod "pod-subpath-test-inlinevolume-z6st" in namespace "provisioning-2329"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":8,"skipped":64,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
... skipping 81 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with different fsgroup applied to the volume contents
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:208
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with different fsgroup applied to the volume contents","total":-1,"completed":5,"skipped":32,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:39:03.051: INFO: Only supported for providers [openstack] (not aws)
... skipping 55 lines ...
Oct  7 16:38:31.406: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-5070jbcb8
STEP: creating a claim
Oct  7 16:38:31.551: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-zm6z
STEP: Creating a pod to test subpath
Oct  7 16:38:31.991: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-zm6z" in namespace "provisioning-5070" to be "Succeeded or Failed"
Oct  7 16:38:32.134: INFO: Pod "pod-subpath-test-dynamicpv-zm6z": Phase="Pending", Reason="", readiness=false. Elapsed: 143.421499ms
Oct  7 16:38:34.280: INFO: Pod "pod-subpath-test-dynamicpv-zm6z": Phase="Pending", Reason="", readiness=false. Elapsed: 2.288684178s
Oct  7 16:38:36.424: INFO: Pod "pod-subpath-test-dynamicpv-zm6z": Phase="Pending", Reason="", readiness=false. Elapsed: 4.433068441s
Oct  7 16:38:38.569: INFO: Pod "pod-subpath-test-dynamicpv-zm6z": Phase="Pending", Reason="", readiness=false. Elapsed: 6.578195701s
Oct  7 16:38:40.714: INFO: Pod "pod-subpath-test-dynamicpv-zm6z": Phase="Pending", Reason="", readiness=false. Elapsed: 8.723200292s
Oct  7 16:38:42.858: INFO: Pod "pod-subpath-test-dynamicpv-zm6z": Phase="Pending", Reason="", readiness=false. Elapsed: 10.866784756s
Oct  7 16:38:45.002: INFO: Pod "pod-subpath-test-dynamicpv-zm6z": Phase="Pending", Reason="", readiness=false. Elapsed: 13.010466556s
Oct  7 16:38:47.146: INFO: Pod "pod-subpath-test-dynamicpv-zm6z": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.155185001s
STEP: Saw pod success
Oct  7 16:38:47.146: INFO: Pod "pod-subpath-test-dynamicpv-zm6z" satisfied condition "Succeeded or Failed"
Oct  7 16:38:47.290: INFO: Trying to get logs from node ip-172-20-47-191.sa-east-1.compute.internal pod pod-subpath-test-dynamicpv-zm6z container test-container-subpath-dynamicpv-zm6z: <nil>
STEP: delete the pod
Oct  7 16:38:47.585: INFO: Waiting for pod pod-subpath-test-dynamicpv-zm6z to disappear
Oct  7 16:38:47.734: INFO: Pod pod-subpath-test-dynamicpv-zm6z no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-zm6z
Oct  7 16:38:47.735: INFO: Deleting pod "pod-subpath-test-dynamicpv-zm6z" in namespace "provisioning-5070"
... skipping 20 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:384
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":9,"skipped":64,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:39:04.491: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 160 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSIStorageCapacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1134
    CSIStorageCapacity unused
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1177
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity unused","total":-1,"completed":5,"skipped":42,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:39:09.501: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 11 lines ...
      Only supported for providers [azure] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1566
------------------------------
SSSSS
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":-1,"completed":9,"skipped":68,"failed":0}
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  7 16:39:06.294: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 15 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  7 16:39:09.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6209" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for cronjob","total":-1,"completed":10,"skipped":68,"failed":0}

SSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 25 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Oct  7 16:39:05.380: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-18b5f4b5-2b08-4193-b01d-d43746a1fdc5" in namespace "security-context-test-9734" to be "Succeeded or Failed"
Oct  7 16:39:05.525: INFO: Pod "busybox-readonly-false-18b5f4b5-2b08-4193-b01d-d43746a1fdc5": Phase="Pending", Reason="", readiness=false. Elapsed: 144.979838ms
Oct  7 16:39:07.670: INFO: Pod "busybox-readonly-false-18b5f4b5-2b08-4193-b01d-d43746a1fdc5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.290084893s
Oct  7 16:39:09.814: INFO: Pod "busybox-readonly-false-18b5f4b5-2b08-4193-b01d-d43746a1fdc5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.434167827s
Oct  7 16:39:11.963: INFO: Pod "busybox-readonly-false-18b5f4b5-2b08-4193-b01d-d43746a1fdc5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.582745349s
Oct  7 16:39:11.963: INFO: Pod "busybox-readonly-false-18b5f4b5-2b08-4193-b01d-d43746a1fdc5" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  7 16:39:11.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-9734" for this suite.


... skipping 2 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a pod with readOnlyRootFilesystem
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:171
    should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":69,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:39:12.271: INFO: Driver "csi-hostpath" does not support topology - skipping
... skipping 5 lines ...
[sig-storage] CSI Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: csi-hostpath]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver "csi-hostpath" does not support topology - skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:92
------------------------------
... skipping 68 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-bindmounted] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":13,"skipped":70,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:39:14.136: INFO: Driver local doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 262 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should create read-only inline ephemeral volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:149
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read-only inline ephemeral volume","total":-1,"completed":7,"skipped":31,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  7 16:39:12.281: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward api env vars
Oct  7 16:39:13.151: INFO: Waiting up to 5m0s for pod "downward-api-2164b86f-a33a-4fd5-aff1-b0ba9ba6816a" in namespace "downward-api-8090" to be "Succeeded or Failed"
Oct  7 16:39:13.294: INFO: Pod "downward-api-2164b86f-a33a-4fd5-aff1-b0ba9ba6816a": Phase="Pending", Reason="", readiness=false. Elapsed: 143.331188ms
Oct  7 16:39:15.438: INFO: Pod "downward-api-2164b86f-a33a-4fd5-aff1-b0ba9ba6816a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.287223779s
Oct  7 16:39:17.583: INFO: Pod "downward-api-2164b86f-a33a-4fd5-aff1-b0ba9ba6816a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.432294899s
Oct  7 16:39:19.727: INFO: Pod "downward-api-2164b86f-a33a-4fd5-aff1-b0ba9ba6816a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.576474324s
Oct  7 16:39:21.871: INFO: Pod "downward-api-2164b86f-a33a-4fd5-aff1-b0ba9ba6816a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.720854311s
STEP: Saw pod success
Oct  7 16:39:21.872: INFO: Pod "downward-api-2164b86f-a33a-4fd5-aff1-b0ba9ba6816a" satisfied condition "Succeeded or Failed"
Oct  7 16:39:22.015: INFO: Trying to get logs from node ip-172-20-42-249.sa-east-1.compute.internal pod downward-api-2164b86f-a33a-4fd5-aff1-b0ba9ba6816a container dapi-container: <nil>
STEP: delete the pod
Oct  7 16:39:22.307: INFO: Waiting for pod downward-api-2164b86f-a33a-4fd5-aff1-b0ba9ba6816a to disappear
Oct  7 16:39:22.450: INFO: Pod downward-api-2164b86f-a33a-4fd5-aff1-b0ba9ba6816a no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:10.458 seconds]
[sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":71,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:39:22.764: INFO: Only supported for providers [openstack] (not aws)
... skipping 90 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:376
    should support inline execution and attach
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:545
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support inline execution and attach","total":-1,"completed":11,"skipped":67,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-network] Services should prevent NodePort collisions","total":-1,"completed":6,"skipped":48,"failed":0}
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  7 16:39:11.354: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 30 lines ...
• [SLOW TEST:14.351 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a validating webhook should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":-1,"completed":7,"skipped":48,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 19 lines ...
Oct  7 16:39:12.703: INFO: PersistentVolumeClaim pvc-l8h8s found but phase is Pending instead of Bound.
Oct  7 16:39:14.846: INFO: PersistentVolumeClaim pvc-l8h8s found and phase=Bound (10.861953664s)
Oct  7 16:39:14.846: INFO: Waiting up to 3m0s for PersistentVolume local-jpsng to have phase Bound
Oct  7 16:39:14.989: INFO: PersistentVolume local-jpsng found and phase=Bound (142.53432ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-vfgz
STEP: Creating a pod to test subpath
Oct  7 16:39:15.419: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-vfgz" in namespace "provisioning-4720" to be "Succeeded or Failed"
Oct  7 16:39:15.562: INFO: Pod "pod-subpath-test-preprovisionedpv-vfgz": Phase="Pending", Reason="", readiness=false. Elapsed: 142.911638ms
Oct  7 16:39:17.705: INFO: Pod "pod-subpath-test-preprovisionedpv-vfgz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.285961428s
Oct  7 16:39:19.848: INFO: Pod "pod-subpath-test-preprovisionedpv-vfgz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.428815383s
Oct  7 16:39:21.992: INFO: Pod "pod-subpath-test-preprovisionedpv-vfgz": Phase="Pending", Reason="", readiness=false. Elapsed: 6.573072779s
Oct  7 16:39:24.135: INFO: Pod "pod-subpath-test-preprovisionedpv-vfgz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.716294842s
STEP: Saw pod success
Oct  7 16:39:24.135: INFO: Pod "pod-subpath-test-preprovisionedpv-vfgz" satisfied condition "Succeeded or Failed"
Oct  7 16:39:24.278: INFO: Trying to get logs from node ip-172-20-42-249.sa-east-1.compute.internal pod pod-subpath-test-preprovisionedpv-vfgz container test-container-subpath-preprovisionedpv-vfgz: <nil>
STEP: delete the pod
Oct  7 16:39:24.569: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-vfgz to disappear
Oct  7 16:39:24.712: INFO: Pod pod-subpath-test-preprovisionedpv-vfgz no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-vfgz
Oct  7 16:39:24.712: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-vfgz" in namespace "provisioning-4720"
... skipping 34 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  7 16:39:26.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "request-timeout-8274" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Server request timeout default timeout should be used if the specified timeout in the request URL is 0s","total":-1,"completed":8,"skipped":51,"failed":0}

SSSSSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":7,"skipped":50,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  7 16:38:58.551: INFO: >>> kubeConfig: /root/.kube/config
... skipping 16 lines ...
Oct  7 16:39:11.880: INFO: PersistentVolumeClaim pvc-wcd6j found but phase is Pending instead of Bound.
Oct  7 16:39:14.025: INFO: PersistentVolumeClaim pvc-wcd6j found and phase=Bound (10.872808652s)
Oct  7 16:39:14.025: INFO: Waiting up to 3m0s for PersistentVolume local-qm922 to have phase Bound
Oct  7 16:39:14.168: INFO: PersistentVolume local-qm922 found and phase=Bound (143.388799ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-dc4l
STEP: Creating a pod to test subpath
Oct  7 16:39:14.600: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-dc4l" in namespace "provisioning-1121" to be "Succeeded or Failed"
Oct  7 16:39:14.744: INFO: Pod "pod-subpath-test-preprovisionedpv-dc4l": Phase="Pending", Reason="", readiness=false. Elapsed: 143.514679ms
Oct  7 16:39:16.889: INFO: Pod "pod-subpath-test-preprovisionedpv-dc4l": Phase="Pending", Reason="", readiness=false. Elapsed: 2.288198498s
Oct  7 16:39:19.033: INFO: Pod "pod-subpath-test-preprovisionedpv-dc4l": Phase="Pending", Reason="", readiness=false. Elapsed: 4.432153099s
Oct  7 16:39:21.177: INFO: Pod "pod-subpath-test-preprovisionedpv-dc4l": Phase="Pending", Reason="", readiness=false. Elapsed: 6.576159047s
Oct  7 16:39:23.339: INFO: Pod "pod-subpath-test-preprovisionedpv-dc4l": Phase="Pending", Reason="", readiness=false. Elapsed: 8.738568881s
Oct  7 16:39:25.484: INFO: Pod "pod-subpath-test-preprovisionedpv-dc4l": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.8830491s
STEP: Saw pod success
Oct  7 16:39:25.484: INFO: Pod "pod-subpath-test-preprovisionedpv-dc4l" satisfied condition "Succeeded or Failed"
Oct  7 16:39:25.628: INFO: Trying to get logs from node ip-172-20-56-61.sa-east-1.compute.internal pod pod-subpath-test-preprovisionedpv-dc4l container test-container-volume-preprovisionedpv-dc4l: <nil>
STEP: delete the pod
Oct  7 16:39:25.928: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-dc4l to disappear
Oct  7 16:39:26.071: INFO: Pod pod-subpath-test-preprovisionedpv-dc4l no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-dc4l
Oct  7 16:39:26.071: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-dc4l" in namespace "provisioning-1121"
... skipping 47 lines ...
Oct  7 16:39:09.404: INFO: PersistentVolumeClaim pvc-5mw2k found and phase=Bound (143.854019ms)
Oct  7 16:39:09.404: INFO: Waiting up to 3m0s for PersistentVolume nfs-w4z4z to have phase Bound
Oct  7 16:39:09.549: INFO: PersistentVolume nfs-w4z4z found and phase=Bound (144.267458ms)
STEP: Checking pod has write access to PersistentVolume
Oct  7 16:39:09.840: INFO: Creating nfs test pod
Oct  7 16:39:09.985: INFO: Pod should terminate with exitcode 0 (success)
Oct  7 16:39:09.985: INFO: Waiting up to 5m0s for pod "pvc-tester-qlzkh" in namespace "pv-5698" to be "Succeeded or Failed"
Oct  7 16:39:10.129: INFO: Pod "pvc-tester-qlzkh": Phase="Pending", Reason="", readiness=false. Elapsed: 143.849868ms
Oct  7 16:39:12.275: INFO: Pod "pvc-tester-qlzkh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.2901835s
Oct  7 16:39:14.420: INFO: Pod "pvc-tester-qlzkh": Phase="Pending", Reason="", readiness=false. Elapsed: 4.434872761s
Oct  7 16:39:16.565: INFO: Pod "pvc-tester-qlzkh": Phase="Pending", Reason="", readiness=false. Elapsed: 6.579569132s
Oct  7 16:39:18.709: INFO: Pod "pvc-tester-qlzkh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.724198082s
STEP: Saw pod success
Oct  7 16:39:18.709: INFO: Pod "pvc-tester-qlzkh" satisfied condition "Succeeded or Failed"
Oct  7 16:39:18.709: INFO: Pod pvc-tester-qlzkh succeeded 
Oct  7 16:39:18.709: INFO: Deleting pod "pvc-tester-qlzkh" in namespace "pv-5698"
Oct  7 16:39:18.858: INFO: Wait up to 5m0s for pod "pvc-tester-qlzkh" to be fully deleted
STEP: Deleting the PVC to invoke the reclaim policy.
Oct  7 16:39:19.002: INFO: Deleting PVC pvc-5mw2k to trigger reclamation of PV 
Oct  7 16:39:19.002: INFO: Deleting PersistentVolumeClaim "pvc-5mw2k"
... skipping 23 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:122
    with Single PV - PVC pairs
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:155
      should create a non-pre-bound PV and PVC: test write access 
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:169
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes NFS with Single PV - PVC pairs should create a non-pre-bound PV and PVC: test write access ","total":-1,"completed":8,"skipped":58,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:39:30.484: INFO: Only supported for providers [azure] (not aws)
... skipping 66 lines ...
• [SLOW TEST:6.455 seconds]
[sig-apps] ReplicaSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":-1,"completed":12,"skipped":68,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:39:30.847: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 99 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:376
    should support exec
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:388
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec","total":-1,"completed":11,"skipped":82,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:39:35.209: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 120 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI Volume expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:561
    should not expand volume if resizingOnDriver=off, resizingOnSC=on
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:590
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should not expand volume if resizingOnDriver=off, resizingOnSC=on","total":-1,"completed":3,"skipped":25,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:39:35.901: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 43 lines ...
STEP: Creating a kubernetes client
Oct  7 16:39:26.924: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating the pod
Oct  7 16:39:27.643: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  7 16:39:36.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-4385" for this suite.


• [SLOW TEST:9.800 seconds]
[sig-node] InitContainer [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":-1,"completed":9,"skipped":57,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:39:36.749: INFO: Only supported for providers [azure] (not aws)
... skipping 116 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (block volmode)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data","total":-1,"completed":6,"skipped":29,"failed":0}

SS
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return chunks of table results for list calls","total":-1,"completed":9,"skipped":66,"failed":0}
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  7 16:39:32.591: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are not locally restarted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:231
STEP: Looking for a node to schedule job pod
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  7 16:39:41.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-451" for this suite.


• [SLOW TEST:9.446 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are not locally restarted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:231
------------------------------
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  7 16:35:24.354: INFO: >>> kubeConfig: /root/.kube/config
... skipping 17 lines ...
• [SLOW TEST:258.527 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":13,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:39:42.910: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 42 lines ...
• [SLOW TEST:8.431 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be submitted and removed [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Pods should be submitted and removed [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":60,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:39:45.202: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 88 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:376
    should support port-forward
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:619
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support port-forward","total":-1,"completed":13,"skipped":76,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
... skipping 53 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":14,"skipped":89,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:39:48.014: INFO: Only supported for providers [openstack] (not aws)
... skipping 38 lines ...
• [SLOW TEST:6.837 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":62,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:39:52.079: INFO: Only supported for providers [gce gke] (not aws)
... skipping 75 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23
  Granular Checks: Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":74,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:39:53.148: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
... skipping 159 lines ...
• [SLOW TEST:19.052 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":-1,"completed":4,"skipped":32,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:39:55.030: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 47 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:217
Oct  7 16:39:54.112: INFO: Waiting up to 5m0s for pod "busybox-readonly-true-3dde2ea1-848c-4c61-8097-81e4ecf3c1fe" in namespace "security-context-test-9403" to be "Succeeded or Failed"
Oct  7 16:39:54.255: INFO: Pod "busybox-readonly-true-3dde2ea1-848c-4c61-8097-81e4ecf3c1fe": Phase="Pending", Reason="", readiness=false. Elapsed: 143.255158ms
Oct  7 16:39:56.399: INFO: Pod "busybox-readonly-true-3dde2ea1-848c-4c61-8097-81e4ecf3c1fe": Phase="Failed", Reason="", readiness=false. Elapsed: 2.287283816s
Oct  7 16:39:56.399: INFO: Pod "busybox-readonly-true-3dde2ea1-848c-4c61-8097-81e4ecf3c1fe" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  7 16:39:56.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-9403" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]","total":-1,"completed":13,"skipped":91,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:39:56.707: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 70 lines ...
• [SLOW TEST:11.711 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert from CR v1 to CR v2 [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":-1,"completed":14,"skipped":77,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:39:58.768: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 84 lines ...
• [SLOW TEST:84.039 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should replace jobs when ReplaceConcurrent [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","total":-1,"completed":4,"skipped":16,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:40:00.692: INFO: Only supported for providers [gce gke] (not aws)
... skipping 64 lines ...
• [SLOW TEST:13.049 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a persistent volume claim
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/resource_quota.go:481
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim","total":-1,"completed":12,"skipped":66,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  7 16:40:00.723: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support seccomp unconfined on the pod [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:169
STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod
Oct  7 16:40:01.591: INFO: Waiting up to 5m0s for pod "security-context-07c9618a-6b4a-4408-a27e-bc875304b23f" in namespace "security-context-1095" to be "Succeeded or Failed"
Oct  7 16:40:01.735: INFO: Pod "security-context-07c9618a-6b4a-4408-a27e-bc875304b23f": Phase="Pending", Reason="", readiness=false. Elapsed: 144.333669ms
Oct  7 16:40:03.880: INFO: Pod "security-context-07c9618a-6b4a-4408-a27e-bc875304b23f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.289383737s
Oct  7 16:40:06.026: INFO: Pod "security-context-07c9618a-6b4a-4408-a27e-bc875304b23f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.435374974s
STEP: Saw pod success
Oct  7 16:40:06.027: INFO: Pod "security-context-07c9618a-6b4a-4408-a27e-bc875304b23f" satisfied condition "Succeeded or Failed"
Oct  7 16:40:06.170: INFO: Trying to get logs from node ip-172-20-56-61.sa-east-1.compute.internal pod security-context-07c9618a-6b4a-4408-a27e-bc875304b23f container test-container: <nil>
STEP: delete the pod
Oct  7 16:40:06.463: INFO: Waiting for pod security-context-07c9618a-6b4a-4408-a27e-bc875304b23f to disappear
Oct  7 16:40:06.607: INFO: Pod security-context-07c9618a-6b4a-4408-a27e-bc875304b23f no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.174 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support seccomp unconfined on the pod [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:169
------------------------------
{"msg":"PASSED [sig-node] Security Context should support seccomp unconfined on the pod [LinuxOnly]","total":-1,"completed":5,"skipped":22,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:40:06.908: INFO: Driver "local" does not provide raw block - skipping
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 32 lines ...
      Driver hostPathSymlink doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":8,"skipped":50,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  7 16:39:28.065: INFO: >>> kubeConfig: /root/.kube/config
... skipping 44 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:449
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":9,"skipped":50,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:40:07.809: INFO: Only supported for providers [azure] (not aws)
... skipping 55 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:474
    that expects a client request
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:475
      should support a client that connects, sends DATA, and disconnects
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:479
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects a client request should support a client that connects, sends DATA, and disconnects","total":-1,"completed":14,"skipped":97,"failed":0}

SS
------------------------------
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 20 lines ...
  Only supported for providers [gce gke] (not aws)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:70
------------------------------
SSSSSSSSSSS
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are not locally restarted","total":-1,"completed":10,"skipped":66,"failed":0}
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  7 16:39:42.050: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 53 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64
[It] should not launch unsafe, but not explicitly enabled sysctls on the node [MinimumKubeletVersion:1.21]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:201
STEP: Creating a pod with a greylisted, but not whitelisted sysctl on the node
STEP: Watching for error events or started pod
STEP: Checking that the pod was rejected
[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  7 16:40:09.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sysctl-1908" for this suite.

... skipping 23 lines ...
• [SLOW TEST:8.326 seconds]
[sig-storage] EmptyDir wrapper volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not conflict [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":-1,"completed":10,"skipped":53,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:40:16.173: INFO: Only supported for providers [azure] (not aws)
... skipping 31 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  7 16:40:17.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2476" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":57,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  7 16:40:17.501: INFO: >>> kubeConfig: /root/.kube/config
... skipping 135 lines ...
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data","total":-1,"completed":8,"skipped":32,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:40:18.527: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 250 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSIStorageCapacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1134
    CSIStorageCapacity used, no capacity
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1177
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","total":-1,"completed":7,"skipped":31,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:40:18.907: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 258 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":12,"skipped":85,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:40:25.343: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 113 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95
    should perform rolling updates and roll backs of template modifications [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":-1,"completed":3,"skipped":57,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:40:26.538: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 45 lines ...
Oct  7 16:40:13.967: INFO: PersistentVolumeClaim pvc-snxjn found and phase=Bound (6.582349008s)
Oct  7 16:40:13.967: INFO: Waiting up to 3m0s for PersistentVolume nfs-8bdnk to have phase Bound
Oct  7 16:40:14.111: INFO: PersistentVolume nfs-8bdnk found and phase=Bound (143.713427ms)
STEP: Checking pod has write access to PersistentVolume
Oct  7 16:40:14.397: INFO: Creating nfs test pod
Oct  7 16:40:14.542: INFO: Pod should terminate with exitcode 0 (success)
Oct  7 16:40:14.542: INFO: Waiting up to 5m0s for pod "pvc-tester-rjx7p" in namespace "pv-8116" to be "Succeeded or Failed"
Oct  7 16:40:14.685: INFO: Pod "pvc-tester-rjx7p": Phase="Pending", Reason="", readiness=false. Elapsed: 143.017998ms
Oct  7 16:40:16.830: INFO: Pod "pvc-tester-rjx7p": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.287743798s
STEP: Saw pod success
Oct  7 16:40:16.830: INFO: Pod "pvc-tester-rjx7p" satisfied condition "Succeeded or Failed"
Oct  7 16:40:16.830: INFO: Pod pvc-tester-rjx7p succeeded 
Oct  7 16:40:16.830: INFO: Deleting pod "pvc-tester-rjx7p" in namespace "pv-8116"
Oct  7 16:40:16.979: INFO: Wait up to 5m0s for pod "pvc-tester-rjx7p" to be fully deleted
STEP: Deleting the PVC to invoke the reclaim policy.
Oct  7 16:40:17.122: INFO: Deleting PVC pvc-snxjn to trigger reclamation of PV nfs-8bdnk
Oct  7 16:40:17.122: INFO: Deleting PersistentVolumeClaim "pvc-snxjn"
... skipping 23 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:122
    with Single PV - PVC pairs
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:155
      create a PV and a pre-bound PVC: test write access
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PV and a pre-bound PVC: test write access","total":-1,"completed":5,"skipped":46,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:40:26.584: INFO: Only supported for providers [azure] (not aws)
... skipping 170 lines ...
Oct  7 16:40:12.008: INFO: PersistentVolumeClaim pvc-zlp2v found but phase is Pending instead of Bound.
Oct  7 16:40:14.154: INFO: PersistentVolumeClaim pvc-zlp2v found and phase=Bound (4.432841899s)
Oct  7 16:40:14.154: INFO: Waiting up to 3m0s for PersistentVolume local-hkf27 to have phase Bound
Oct  7 16:40:14.297: INFO: PersistentVolume local-hkf27 found and phase=Bound (143.122168ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-zhr4
STEP: Creating a pod to test subpath
Oct  7 16:40:14.729: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-zhr4" in namespace "provisioning-534" to be "Succeeded or Failed"
Oct  7 16:40:14.872: INFO: Pod "pod-subpath-test-preprovisionedpv-zhr4": Phase="Pending", Reason="", readiness=false. Elapsed: 142.843197ms
Oct  7 16:40:17.015: INFO: Pod "pod-subpath-test-preprovisionedpv-zhr4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.286402089s
Oct  7 16:40:19.161: INFO: Pod "pod-subpath-test-preprovisionedpv-zhr4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.431861481s
Oct  7 16:40:21.304: INFO: Pod "pod-subpath-test-preprovisionedpv-zhr4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.575255787s
Oct  7 16:40:23.449: INFO: Pod "pod-subpath-test-preprovisionedpv-zhr4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.720245554s
STEP: Saw pod success
Oct  7 16:40:23.449: INFO: Pod "pod-subpath-test-preprovisionedpv-zhr4" satisfied condition "Succeeded or Failed"
Oct  7 16:40:23.592: INFO: Trying to get logs from node ip-172-20-47-191.sa-east-1.compute.internal pod pod-subpath-test-preprovisionedpv-zhr4 container test-container-subpath-preprovisionedpv-zhr4: <nil>
STEP: delete the pod
Oct  7 16:40:23.884: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-zhr4 to disappear
Oct  7 16:40:24.028: INFO: Pod pod-subpath-test-preprovisionedpv-zhr4 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-zhr4
Oct  7 16:40:24.028: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-zhr4" in namespace "provisioning-534"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:384
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":13,"skipped":72,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:40:27.006: INFO: Only supported for providers [vsphere] (not aws)
... skipping 14 lines ...
      Only supported for providers [vsphere] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1437
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":5,"skipped":42,"failed":1,"failures":["[sig-network] Services should be rejected when no endpoints exist"]}
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  7 16:38:58.046: INFO: >>> kubeConfig: /root/.kube/config
... skipping 111 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support multiple inline ephemeral volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:211
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support multiple inline ephemeral volumes","total":-1,"completed":6,"skipped":42,"failed":1,"failures":["[sig-network] Services should be rejected when no endpoints exist"]}
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:40:27.307: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 108 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link-bindmounted] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":15,"skipped":111,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:40:28.713: INFO: Only supported for providers [vsphere] (not aws)
... skipping 116 lines ...
Oct  7 16:40:10.215: INFO: PersistentVolume nfs-cjfkh found and phase=Bound (142.723998ms)
Oct  7 16:40:10.360: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-g7c8j] to have phase Bound
Oct  7 16:40:10.503: INFO: PersistentVolumeClaim pvc-g7c8j found and phase=Bound (142.879688ms)
STEP: Checking pod has write access to PersistentVolumes
Oct  7 16:40:10.646: INFO: Creating nfs test pod
Oct  7 16:40:10.789: INFO: Pod should terminate with exitcode 0 (success)
Oct  7 16:40:10.790: INFO: Waiting up to 5m0s for pod "pvc-tester-tc2kz" in namespace "pv-1052" to be "Succeeded or Failed"
Oct  7 16:40:10.933: INFO: Pod "pvc-tester-tc2kz": Phase="Pending", Reason="", readiness=false. Elapsed: 143.293449ms
Oct  7 16:40:13.076: INFO: Pod "pvc-tester-tc2kz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.286825938s
STEP: Saw pod success
Oct  7 16:40:13.076: INFO: Pod "pvc-tester-tc2kz" satisfied condition "Succeeded or Failed"
Oct  7 16:40:13.076: INFO: Pod pvc-tester-tc2kz succeeded 
Oct  7 16:40:13.076: INFO: Deleting pod "pvc-tester-tc2kz" in namespace "pv-1052"
Oct  7 16:40:13.227: INFO: Wait up to 5m0s for pod "pvc-tester-tc2kz" to be fully deleted
Oct  7 16:40:13.513: INFO: Creating nfs test pod
Oct  7 16:40:13.657: INFO: Pod should terminate with exitcode 0 (success)
Oct  7 16:40:13.657: INFO: Waiting up to 5m0s for pod "pvc-tester-v7d52" in namespace "pv-1052" to be "Succeeded or Failed"
Oct  7 16:40:13.800: INFO: Pod "pvc-tester-v7d52": Phase="Pending", Reason="", readiness=false. Elapsed: 142.821008ms
Oct  7 16:40:15.944: INFO: Pod "pvc-tester-v7d52": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.287273237s
STEP: Saw pod success
Oct  7 16:40:15.944: INFO: Pod "pvc-tester-v7d52" satisfied condition "Succeeded or Failed"
Oct  7 16:40:15.944: INFO: Pod pvc-tester-v7d52 succeeded 
Oct  7 16:40:15.944: INFO: Deleting pod "pvc-tester-v7d52" in namespace "pv-1052"
Oct  7 16:40:16.097: INFO: Wait up to 5m0s for pod "pvc-tester-v7d52" to be fully deleted
STEP: Deleting PVCs to invoke reclaim policy
Oct  7 16:40:16.830: INFO: Deleting PVC pvc-dn4hr to trigger reclamation of PV nfs-7znj5
Oct  7 16:40:16.830: INFO: Deleting PersistentVolumeClaim "pvc-dn4hr"
... skipping 31 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:122
    with multiple PVs and PVCs all in same ns
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:212
      should create 2 PVs and 4 PVCs: test write access
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:233
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns should create 2 PVs and 4 PVCs: test write access","total":-1,"completed":15,"skipped":86,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 29 lines ...
• [SLOW TEST:13.507 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate configmap [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":-1,"completed":12,"skipped":60,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
... skipping 77 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with same fsgroup applied to the volume contents
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:208
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with same fsgroup applied to the volume contents","total":-1,"completed":5,"skipped":38,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:40:32.257: INFO: Only supported for providers [gce gke] (not aws)
... skipping 101 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Container restart
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:130
    should verify that container can restart successfully after configmaps modified
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:131
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it","total":-1,"completed":2,"skipped":17,"failed":0}
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  7 16:40:30.429: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 18 lines ...
• [SLOW TEST:10.189 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":17,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:40:40.638: INFO: Only supported for providers [gce gke] (not aws)
... skipping 130 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI FSGroupPolicy [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1436
    should modify fsGroup if fsGroupPolicy=default
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1460
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default","total":-1,"completed":6,"skipped":39,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:40:40.672: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 98 lines ...
• [SLOW TEST:16.965 seconds]
[sig-apps] ReplicationController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should test the lifecycle of a ReplicationController [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]","total":-1,"completed":14,"skipped":79,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:40:44.017: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 89 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-53a39200-9207-44a7-a3c0-3d16f48d9cce
STEP: Creating a pod to test consume configMaps
Oct  7 16:40:28.417: INFO: Waiting up to 5m0s for pod "pod-configmaps-777e2000-65d0-48bb-942b-4977c5ff0720" in namespace "configmap-5184" to be "Succeeded or Failed"
Oct  7 16:40:28.560: INFO: Pod "pod-configmaps-777e2000-65d0-48bb-942b-4977c5ff0720": Phase="Pending", Reason="", readiness=false. Elapsed: 143.171009ms
Oct  7 16:40:30.707: INFO: Pod "pod-configmaps-777e2000-65d0-48bb-942b-4977c5ff0720": Phase="Pending", Reason="", readiness=false. Elapsed: 2.289927659s
Oct  7 16:40:32.851: INFO: Pod "pod-configmaps-777e2000-65d0-48bb-942b-4977c5ff0720": Phase="Pending", Reason="", readiness=false. Elapsed: 4.434148368s
Oct  7 16:40:34.996: INFO: Pod "pod-configmaps-777e2000-65d0-48bb-942b-4977c5ff0720": Phase="Pending", Reason="", readiness=false. Elapsed: 6.579129439s
Oct  7 16:40:37.140: INFO: Pod "pod-configmaps-777e2000-65d0-48bb-942b-4977c5ff0720": Phase="Pending", Reason="", readiness=false. Elapsed: 8.722662246s
Oct  7 16:40:39.283: INFO: Pod "pod-configmaps-777e2000-65d0-48bb-942b-4977c5ff0720": Phase="Pending", Reason="", readiness=false. Elapsed: 10.866412424s
Oct  7 16:40:41.428: INFO: Pod "pod-configmaps-777e2000-65d0-48bb-942b-4977c5ff0720": Phase="Pending", Reason="", readiness=false. Elapsed: 13.010915802s
Oct  7 16:40:43.572: INFO: Pod "pod-configmaps-777e2000-65d0-48bb-942b-4977c5ff0720": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.155150203s
STEP: Saw pod success
Oct  7 16:40:43.572: INFO: Pod "pod-configmaps-777e2000-65d0-48bb-942b-4977c5ff0720" satisfied condition "Succeeded or Failed"
Oct  7 16:40:43.716: INFO: Trying to get logs from node ip-172-20-43-90.sa-east-1.compute.internal pod pod-configmaps-777e2000-65d0-48bb-942b-4977c5ff0720 container agnhost-container: <nil>
STEP: delete the pod
Oct  7 16:40:44.008: INFO: Waiting for pod pod-configmaps-777e2000-65d0-48bb-942b-4977c5ff0720 to disappear
Oct  7 16:40:44.151: INFO: Pod pod-configmaps-777e2000-65d0-48bb-942b-4977c5ff0720 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:17.041 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":60,"failed":1,"failures":["[sig-network] Services should be rejected when no endpoints exist"]}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:40:44.452: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 42 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    on terminated container
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:134
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":62,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl Port forwarding
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 40 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:452
    that expects a client request
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:453
      should support a client that connects, sends NO DATA, and disconnects
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:454
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects a client request should support a client that connects, sends NO DATA, and disconnects","total":-1,"completed":9,"skipped":59,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 51 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:449
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":11,"skipped":67,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:40:46.918: INFO: Only supported for providers [vsphere] (not aws)
... skipping 111 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  7 16:40:48.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-8161" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":-1,"completed":14,"skipped":66,"failed":0}

S
------------------------------
[BeforeEach] [sig-cli] Kubectl Port forwarding
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 41 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:452
    that expects a client request
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:453
      should support a client that connects, sends DATA, and disconnects
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:457
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects a client request should support a client that connects, sends DATA, and disconnects","total":-1,"completed":16,"skipped":120,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:40:54.448: INFO: Only supported for providers [gce gke] (not aws)
... skipping 25 lines ...
[AfterEach] [sig-api-machinery] client-go should negotiate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  7 16:40:54.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

•
------------------------------
{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/vnd.kubernetes.protobuf,application/json\"","total":-1,"completed":17,"skipped":125,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:40:54.908: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 51 lines ...
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W1007 16:35:56.352021    5518 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Oct  7 16:40:56.639: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering.
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  7 16:40:56.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7493" for this suite.


• [SLOW TEST:302.889 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":-1,"completed":3,"skipped":16,"failed":0}

S
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 50 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1460
    should add annotations for pods in rc  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
S
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":-1,"completed":8,"skipped":65,"failed":1,"failures":["[sig-network] Services should be rejected when no endpoints exist"]}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:40:56.967: INFO: Only supported for providers [azure] (not aws)
... skipping 25 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:48
STEP: Creating a pod to test hostPath mode
Oct  7 16:40:49.336: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-4112" to be "Succeeded or Failed"
Oct  7 16:40:49.480: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 143.226608ms
Oct  7 16:40:51.627: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.290544715s
Oct  7 16:40:53.771: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.434621591s
Oct  7 16:40:55.916: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.579649905s
Oct  7 16:40:58.059: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.723154145s
Oct  7 16:41:00.204: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.868169322s
STEP: Saw pod success
Oct  7 16:41:00.205: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
Oct  7 16:41:00.348: INFO: Trying to get logs from node ip-172-20-42-249.sa-east-1.compute.internal pod pod-host-path-test container test-container-1: <nil>
STEP: delete the pod
Oct  7 16:41:00.643: INFO: Waiting for pod pod-host-path-test to disappear
Oct  7 16:41:00.787: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:12.608 seconds]
[sig-storage] HostPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should give a volume the correct mode [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:48
------------------------------
{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","total":-1,"completed":15,"skipped":67,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:41:01.125: INFO: Only supported for providers [azure] (not aws)
... skipping 43 lines ...
STEP: Creating a kubernetes client
Oct  7 16:40:54.938: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename volume-provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Dynamic Provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:146
[It] should report an error and create no PV
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:825
STEP: creating a StorageClass
STEP: Creating a StorageClass
STEP: creating a claim object with a suffix for gluster dynamic provisioner
Oct  7 16:40:55.945: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Oct  7 16:41:02.235: INFO: deleting claim "volume-provisioning-614"/"pvc-4nttw"
... skipping 6 lines ...

• [SLOW TEST:7.879 seconds]
[sig-storage] Dynamic Provisioning
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Invalid AWS KMS key
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:824
    should report an error and create no PV
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:825
------------------------------
{"msg":"PASSED [sig-storage] Dynamic Provisioning Invalid AWS KMS key should report an error and create no PV","total":-1,"completed":18,"skipped":131,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:41:02.868: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 54 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-storage] Subpath Container restart should verify that container can restart successfully after configmaps modified","total":-1,"completed":8,"skipped":73,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  7 16:40:38.191: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 62 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: block] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":9,"skipped":73,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:41:04.502: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 138 lines ...
Oct  7 16:40:46.612: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/downwardapi.go:109
STEP: Creating a pod to test downward api env vars
Oct  7 16:40:47.483: INFO: Waiting up to 5m0s for pod "downward-api-13b59520-2b60-4819-a736-394febbb8273" in namespace "downward-api-6627" to be "Succeeded or Failed"
Oct  7 16:40:47.675: INFO: Pod "downward-api-13b59520-2b60-4819-a736-394febbb8273": Phase="Pending", Reason="", readiness=false. Elapsed: 191.411595ms
Oct  7 16:40:49.818: INFO: Pod "downward-api-13b59520-2b60-4819-a736-394febbb8273": Phase="Pending", Reason="", readiness=false. Elapsed: 2.335104794s
Oct  7 16:40:51.962: INFO: Pod "downward-api-13b59520-2b60-4819-a736-394febbb8273": Phase="Pending", Reason="", readiness=false. Elapsed: 4.47840475s
Oct  7 16:40:54.106: INFO: Pod "downward-api-13b59520-2b60-4819-a736-394febbb8273": Phase="Pending", Reason="", readiness=false. Elapsed: 6.622339597s
Oct  7 16:40:56.249: INFO: Pod "downward-api-13b59520-2b60-4819-a736-394febbb8273": Phase="Pending", Reason="", readiness=false. Elapsed: 8.766226171s
Oct  7 16:40:58.394: INFO: Pod "downward-api-13b59520-2b60-4819-a736-394febbb8273": Phase="Pending", Reason="", readiness=false. Elapsed: 10.91084311s
Oct  7 16:41:00.539: INFO: Pod "downward-api-13b59520-2b60-4819-a736-394febbb8273": Phase="Pending", Reason="", readiness=false. Elapsed: 13.055972188s
Oct  7 16:41:02.684: INFO: Pod "downward-api-13b59520-2b60-4819-a736-394febbb8273": Phase="Pending", Reason="", readiness=false. Elapsed: 15.201128177s
Oct  7 16:41:04.828: INFO: Pod "downward-api-13b59520-2b60-4819-a736-394febbb8273": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.345197473s
STEP: Saw pod success
Oct  7 16:41:04.829: INFO: Pod "downward-api-13b59520-2b60-4819-a736-394febbb8273" satisfied condition "Succeeded or Failed"
Oct  7 16:41:04.971: INFO: Trying to get logs from node ip-172-20-43-90.sa-east-1.compute.internal pod downward-api-13b59520-2b60-4819-a736-394febbb8273 container dapi-container: <nil>
STEP: delete the pod
Oct  7 16:41:05.274: INFO: Waiting for pod downward-api-13b59520-2b60-4819-a736-394febbb8273 to disappear
Oct  7 16:41:05.417: INFO: Pod downward-api-13b59520-2b60-4819-a736-394febbb8273 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:19.093 seconds]
[sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/downwardapi.go:109
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]","total":-1,"completed":10,"skipped":67,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:41:05.716: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 35 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a container with runAsNonRoot
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:104
    should not run with an explicit root user ID [LinuxOnly]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:139
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should not run with an explicit root user ID [LinuxOnly]","total":-1,"completed":16,"skipped":80,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:41:08.478: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 109 lines ...
Oct  7 16:40:56.866: INFO: PersistentVolumeClaim pvc-lkwhj found but phase is Pending instead of Bound.
Oct  7 16:40:59.009: INFO: PersistentVolumeClaim pvc-lkwhj found and phase=Bound (15.148810774s)
Oct  7 16:40:59.009: INFO: Waiting up to 3m0s for PersistentVolume local-ghmsv to have phase Bound
Oct  7 16:40:59.158: INFO: PersistentVolume local-ghmsv found and phase=Bound (148.385537ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-slz7
STEP: Creating a pod to test subpath
Oct  7 16:40:59.589: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-slz7" in namespace "provisioning-6720" to be "Succeeded or Failed"
Oct  7 16:40:59.732: INFO: Pod "pod-subpath-test-preprovisionedpv-slz7": Phase="Pending", Reason="", readiness=false. Elapsed: 142.923207ms
Oct  7 16:41:01.875: INFO: Pod "pod-subpath-test-preprovisionedpv-slz7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.286181835s
Oct  7 16:41:04.019: INFO: Pod "pod-subpath-test-preprovisionedpv-slz7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.430431164s
Oct  7 16:41:06.165: INFO: Pod "pod-subpath-test-preprovisionedpv-slz7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.576441309s
STEP: Saw pod success
Oct  7 16:41:06.166: INFO: Pod "pod-subpath-test-preprovisionedpv-slz7" satisfied condition "Succeeded or Failed"
Oct  7 16:41:06.308: INFO: Trying to get logs from node ip-172-20-56-61.sa-east-1.compute.internal pod pod-subpath-test-preprovisionedpv-slz7 container test-container-volume-preprovisionedpv-slz7: <nil>
STEP: delete the pod
Oct  7 16:41:06.606: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-slz7 to disappear
Oct  7 16:41:06.749: INFO: Pod pod-subpath-test-preprovisionedpv-slz7 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-slz7
Oct  7 16:41:06.749: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-slz7" in namespace "provisioning-6720"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":16,"skipped":93,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:41:08.782: INFO: Only supported for providers [vsphere] (not aws)
... skipping 31 lines ...
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-9807
STEP: Creating statefulset with conflicting port in namespace statefulset-9807
STEP: Waiting until pod test-pod will start running in namespace statefulset-9807
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-9807
Oct  7 16:40:43.048: INFO: Observed stateful pod in namespace: statefulset-9807, name: ss-0, uid: 123342ad-fc44-493b-9b58-f4af477bf9d9, status phase: Pending. Waiting for statefulset controller to delete.
Oct  7 16:40:45.466: INFO: Observed stateful pod in namespace: statefulset-9807, name: ss-0, uid: 123342ad-fc44-493b-9b58-f4af477bf9d9, status phase: Failed. Waiting for statefulset controller to delete.
Oct  7 16:40:45.473: INFO: Observed stateful pod in namespace: statefulset-9807, name: ss-0, uid: 123342ad-fc44-493b-9b58-f4af477bf9d9, status phase: Failed. Waiting for statefulset controller to delete.
Oct  7 16:40:45.476: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-9807
STEP: Removing pod with conflicting port in namespace statefulset-9807
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-9807 and will be in running state
[AfterEach] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116
Oct  7 16:40:50.061: INFO: Deleting all statefulset in ns statefulset-9807
... skipping 11 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95
    Should recreate evicted statefulset [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":-1,"completed":6,"skipped":46,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:41:11.667: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 22 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50
[It] nonexistent volume subPath should have the correct mode and owner using FSGroup
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:63
STEP: Creating a pod to test emptydir subpath on tmpfs
Oct  7 16:41:06.590: INFO: Waiting up to 5m0s for pod "pod-64328c7f-54bc-4309-8ad7-6f96c60d22c3" in namespace "emptydir-4921" to be "Succeeded or Failed"
Oct  7 16:41:06.733: INFO: Pod "pod-64328c7f-54bc-4309-8ad7-6f96c60d22c3": Phase="Pending", Reason="", readiness=false. Elapsed: 143.033899ms
Oct  7 16:41:08.877: INFO: Pod "pod-64328c7f-54bc-4309-8ad7-6f96c60d22c3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.287129336s
Oct  7 16:41:11.020: INFO: Pod "pod-64328c7f-54bc-4309-8ad7-6f96c60d22c3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.430198641s
STEP: Saw pod success
Oct  7 16:41:11.020: INFO: Pod "pod-64328c7f-54bc-4309-8ad7-6f96c60d22c3" satisfied condition "Succeeded or Failed"
Oct  7 16:41:11.163: INFO: Trying to get logs from node ip-172-20-42-249.sa-east-1.compute.internal pod pod-64328c7f-54bc-4309-8ad7-6f96c60d22c3 container test-container: <nil>
STEP: delete the pod
Oct  7 16:41:11.460: INFO: Waiting for pod pod-64328c7f-54bc-4309-8ad7-6f96c60d22c3 to disappear
Oct  7 16:41:11.603: INFO: Pod pod-64328c7f-54bc-4309-8ad7-6f96c60d22c3 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 16 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-ee60bfa3-d07f-44f7-a68f-d62e8547b5f4
STEP: Creating a pod to test consume secrets
Oct  7 16:41:06.638: INFO: Waiting up to 5m0s for pod "pod-secrets-486162c9-a55d-4b9c-90ca-815b5f02a0c4" in namespace "secrets-7254" to be "Succeeded or Failed"
Oct  7 16:41:06.781: INFO: Pod "pod-secrets-486162c9-a55d-4b9c-90ca-815b5f02a0c4": Phase="Pending", Reason="", readiness=false. Elapsed: 142.542369ms
Oct  7 16:41:08.925: INFO: Pod "pod-secrets-486162c9-a55d-4b9c-90ca-815b5f02a0c4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.287081867s
Oct  7 16:41:11.073: INFO: Pod "pod-secrets-486162c9-a55d-4b9c-90ca-815b5f02a0c4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.434853142s
STEP: Saw pod success
Oct  7 16:41:11.073: INFO: Pod "pod-secrets-486162c9-a55d-4b9c-90ca-815b5f02a0c4" satisfied condition "Succeeded or Failed"
Oct  7 16:41:11.216: INFO: Trying to get logs from node ip-172-20-42-249.sa-east-1.compute.internal pod pod-secrets-486162c9-a55d-4b9c-90ca-815b5f02a0c4 container secret-volume-test: <nil>
STEP: delete the pod
Oct  7 16:41:11.508: INFO: Waiting for pod pod-secrets-486162c9-a55d-4b9c-90ca-815b5f02a0c4 to disappear
Oct  7 16:41:11.651: INFO: Pod pod-secrets-486162c9-a55d-4b9c-90ca-815b5f02a0c4 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.308 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":101,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:41:11.967: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 62 lines ...
• [SLOW TEST:28.237 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a mutating webhook should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":-1,"completed":15,"skipped":86,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:41:12.342: INFO: Only supported for providers [gce gke] (not aws)
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: gcepd]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1301
------------------------------
... skipping 50 lines ...
• [SLOW TEST:53.086 seconds]
[sig-storage] Mounted volume expand
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Should verify mounted devices can be resized
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/mounted_volume_resize.go:122
------------------------------
{"msg":"PASSED [sig-storage] Mounted volume expand Should verify mounted devices can be resized","total":-1,"completed":15,"skipped":94,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:41:13.433: INFO: Only supported for providers [openstack] (not aws)
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: cinder]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [openstack] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1092
------------------------------
... skipping 125 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  Clean up pods on node
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:279
    kubelet should be able to delete 10 pods per node in 1m0s.
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:341
------------------------------
{"msg":"PASSED [sig-node] kubelet Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s.","total":-1,"completed":13,"skipped":89,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:41:14.610: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 74 lines ...
[It] should allow exec of files on the volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
Oct  7 16:41:12.402: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Oct  7 16:41:12.402: INFO: Creating resource for inline volume
STEP: Creating pod exec-volume-test-inlinevolume-bk5w
STEP: Creating a pod to test exec-volume-test
Oct  7 16:41:12.549: INFO: Waiting up to 5m0s for pod "exec-volume-test-inlinevolume-bk5w" in namespace "volume-9285" to be "Succeeded or Failed"
Oct  7 16:41:12.693: INFO: Pod "exec-volume-test-inlinevolume-bk5w": Phase="Pending", Reason="", readiness=false. Elapsed: 143.960657ms
Oct  7 16:41:14.839: INFO: Pod "exec-volume-test-inlinevolume-bk5w": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.289566793s
STEP: Saw pod success
Oct  7 16:41:14.839: INFO: Pod "exec-volume-test-inlinevolume-bk5w" satisfied condition "Succeeded or Failed"
Oct  7 16:41:14.983: INFO: Trying to get logs from node ip-172-20-42-249.sa-east-1.compute.internal pod exec-volume-test-inlinevolume-bk5w container exec-container-inlinevolume-bk5w: <nil>
STEP: delete the pod
Oct  7 16:41:15.277: INFO: Waiting for pod exec-volume-test-inlinevolume-bk5w to disappear
Oct  7 16:41:15.421: INFO: Pod exec-volume-test-inlinevolume-bk5w no longer exists
STEP: Deleting pod exec-volume-test-inlinevolume-bk5w
Oct  7 16:41:15.421: INFO: Deleting pod "exec-volume-test-inlinevolume-bk5w" in namespace "volume-9285"
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  7 16:41:15.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-9285" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":7,"skipped":48,"failed":0}
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  7 16:41:15.874: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 4 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  7 16:41:20.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-5876" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":-1,"completed":8,"skipped":48,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:41:20.503: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 68 lines ...
Oct  7 16:41:08.789: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support readOnly file specified in the volumeMount [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:384
Oct  7 16:41:09.509: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Oct  7 16:41:09.800: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-9415" in namespace "provisioning-9415" to be "Succeeded or Failed"
Oct  7 16:41:09.953: INFO: Pod "hostpath-symlink-prep-provisioning-9415": Phase="Pending", Reason="", readiness=false. Elapsed: 153.066188ms
Oct  7 16:41:12.097: INFO: Pod "hostpath-symlink-prep-provisioning-9415": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.297370395s
STEP: Saw pod success
Oct  7 16:41:12.097: INFO: Pod "hostpath-symlink-prep-provisioning-9415" satisfied condition "Succeeded or Failed"
Oct  7 16:41:12.097: INFO: Deleting pod "hostpath-symlink-prep-provisioning-9415" in namespace "provisioning-9415"
Oct  7 16:41:12.244: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-9415" to be fully deleted
Oct  7 16:41:12.386: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-zmf9
STEP: Creating a pod to test subpath
Oct  7 16:41:12.530: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-zmf9" in namespace "provisioning-9415" to be "Succeeded or Failed"
Oct  7 16:41:12.674: INFO: Pod "pod-subpath-test-inlinevolume-zmf9": Phase="Pending", Reason="", readiness=false. Elapsed: 143.192689ms
Oct  7 16:41:14.820: INFO: Pod "pod-subpath-test-inlinevolume-zmf9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.289167204s
Oct  7 16:41:16.966: INFO: Pod "pod-subpath-test-inlinevolume-zmf9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.435952209s
STEP: Saw pod success
Oct  7 16:41:16.967: INFO: Pod "pod-subpath-test-inlinevolume-zmf9" satisfied condition "Succeeded or Failed"
Oct  7 16:41:17.111: INFO: Trying to get logs from node ip-172-20-56-61.sa-east-1.compute.internal pod pod-subpath-test-inlinevolume-zmf9 container test-container-subpath-inlinevolume-zmf9: <nil>
STEP: delete the pod
Oct  7 16:41:17.419: INFO: Waiting for pod pod-subpath-test-inlinevolume-zmf9 to disappear
Oct  7 16:41:17.561: INFO: Pod pod-subpath-test-inlinevolume-zmf9 no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-zmf9
Oct  7 16:41:17.561: INFO: Deleting pod "pod-subpath-test-inlinevolume-zmf9" in namespace "provisioning-9415"
STEP: Deleting pod
Oct  7 16:41:17.704: INFO: Deleting pod "pod-subpath-test-inlinevolume-zmf9" in namespace "provisioning-9415"
Oct  7 16:41:18.002: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-9415" in namespace "provisioning-9415" to be "Succeeded or Failed"
Oct  7 16:41:18.174: INFO: Pod "hostpath-symlink-prep-provisioning-9415": Phase="Pending", Reason="", readiness=false. Elapsed: 171.497076ms
Oct  7 16:41:20.318: INFO: Pod "hostpath-symlink-prep-provisioning-9415": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.316069901s
STEP: Saw pod success
Oct  7 16:41:20.318: INFO: Pod "hostpath-symlink-prep-provisioning-9415" satisfied condition "Succeeded or Failed"
Oct  7 16:41:20.318: INFO: Deleting pod "hostpath-symlink-prep-provisioning-9415" in namespace "provisioning-9415"
Oct  7 16:41:20.467: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-9415" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  7 16:41:20.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-9415" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:384
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":17,"skipped":97,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  7 16:41:20.589: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-56949524-6b75-49f1-b202-92ce569ceacd
STEP: Creating a pod to test consume configMaps
Oct  7 16:41:21.669: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-be0f7781-8412-4ba8-8772-3915239a7cfd" in namespace "projected-7811" to be "Succeeded or Failed"
Oct  7 16:41:21.815: INFO: Pod "pod-projected-configmaps-be0f7781-8412-4ba8-8772-3915239a7cfd": Phase="Pending", Reason="", readiness=false. Elapsed: 146.491848ms
Oct  7 16:41:23.961: INFO: Pod "pod-projected-configmaps-be0f7781-8412-4ba8-8772-3915239a7cfd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.291730673s
STEP: Saw pod success
Oct  7 16:41:23.961: INFO: Pod "pod-projected-configmaps-be0f7781-8412-4ba8-8772-3915239a7cfd" satisfied condition "Succeeded or Failed"
Oct  7 16:41:24.105: INFO: Trying to get logs from node ip-172-20-42-249.sa-east-1.compute.internal pod pod-projected-configmaps-be0f7781-8412-4ba8-8772-3915239a7cfd container agnhost-container: <nil>
STEP: delete the pod
Oct  7 16:41:24.398: INFO: Waiting for pod pod-projected-configmaps-be0f7781-8412-4ba8-8772-3915239a7cfd to disappear
Oct  7 16:41:24.543: INFO: Pod pod-projected-configmaps-be0f7781-8412-4ba8-8772-3915239a7cfd no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  7 16:41:24.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7811" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":64,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 142 lines ...
• [SLOW TEST:157.982 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should not disrupt a cloud load-balancer's connectivity during rollout
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:158
------------------------------
{"msg":"PASSED [sig-apps] Deployment should not disrupt a cloud load-balancer's connectivity during rollout","total":-1,"completed":8,"skipped":50,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 62 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: block] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":19,"skipped":143,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:41:28.085: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 19 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:335
Oct  7 16:41:21.882: INFO: Waiting up to 5m0s for pod "alpine-nnp-nil-3f524682-c060-4bcf-a8d9-8a7edb3ad7b3" in namespace "security-context-test-8910" to be "Succeeded or Failed"
Oct  7 16:41:22.025: INFO: Pod "alpine-nnp-nil-3f524682-c060-4bcf-a8d9-8a7edb3ad7b3": Phase="Pending", Reason="", readiness=false. Elapsed: 142.863338ms
Oct  7 16:41:24.169: INFO: Pod "alpine-nnp-nil-3f524682-c060-4bcf-a8d9-8a7edb3ad7b3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.286652064s
Oct  7 16:41:26.326: INFO: Pod "alpine-nnp-nil-3f524682-c060-4bcf-a8d9-8a7edb3ad7b3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.44405579s
Oct  7 16:41:28.470: INFO: Pod "alpine-nnp-nil-3f524682-c060-4bcf-a8d9-8a7edb3ad7b3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.588078478s
Oct  7 16:41:28.470: INFO: Pod "alpine-nnp-nil-3f524682-c060-4bcf-a8d9-8a7edb3ad7b3" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  7 16:41:28.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-8910" for this suite.


... skipping 2 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when creating containers with AllowPrivilegeEscalation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:296
    should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:335
------------------------------
{"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":18,"skipped":99,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:41:28.935: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 14 lines ...
      Driver hostPath doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] nonexistent volume subPath should have the correct mode and owner using FSGroup","total":-1,"completed":11,"skipped":69,"failed":0}
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  7 16:41:11.904: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 26 lines ...
• [SLOW TEST:18.889 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields at the schema root [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":-1,"completed":12,"skipped":69,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:41:30.804: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 88 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-map-1b562fcc-1675-45dc-915a-99c4c850193d
STEP: Creating a pod to test consume configMaps
Oct  7 16:41:25.879: INFO: Waiting up to 5m0s for pod "pod-configmaps-f2221921-a9be-4fe8-b23d-76a251500558" in namespace "configmap-6451" to be "Succeeded or Failed"
Oct  7 16:41:26.024: INFO: Pod "pod-configmaps-f2221921-a9be-4fe8-b23d-76a251500558": Phase="Pending", Reason="", readiness=false. Elapsed: 144.832807ms
Oct  7 16:41:28.171: INFO: Pod "pod-configmaps-f2221921-a9be-4fe8-b23d-76a251500558": Phase="Pending", Reason="", readiness=false. Elapsed: 2.292069986s
Oct  7 16:41:30.316: INFO: Pod "pod-configmaps-f2221921-a9be-4fe8-b23d-76a251500558": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.437439822s
STEP: Saw pod success
Oct  7 16:41:30.316: INFO: Pod "pod-configmaps-f2221921-a9be-4fe8-b23d-76a251500558" satisfied condition "Succeeded or Failed"
Oct  7 16:41:30.460: INFO: Trying to get logs from node ip-172-20-42-249.sa-east-1.compute.internal pod pod-configmaps-f2221921-a9be-4fe8-b23d-76a251500558 container agnhost-container: <nil>
STEP: delete the pod
Oct  7 16:41:30.756: INFO: Waiting for pod pod-configmaps-f2221921-a9be-4fe8-b23d-76a251500558 to disappear
Oct  7 16:41:30.900: INFO: Pod pod-configmaps-f2221921-a9be-4fe8-b23d-76a251500558 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 28 lines ...
Oct  7 16:41:28.096: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test override all
Oct  7 16:41:28.961: INFO: Waiting up to 5m0s for pod "client-containers-feab56ad-77a2-4a2c-a7d8-d8451ae2d1de" in namespace "containers-6902" to be "Succeeded or Failed"
Oct  7 16:41:29.104: INFO: Pod "client-containers-feab56ad-77a2-4a2c-a7d8-d8451ae2d1de": Phase="Pending", Reason="", readiness=false. Elapsed: 143.406217ms
Oct  7 16:41:31.252: INFO: Pod "client-containers-feab56ad-77a2-4a2c-a7d8-d8451ae2d1de": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.291613124s
STEP: Saw pod success
Oct  7 16:41:31.253: INFO: Pod "client-containers-feab56ad-77a2-4a2c-a7d8-d8451ae2d1de" satisfied condition "Succeeded or Failed"
Oct  7 16:41:31.401: INFO: Trying to get logs from node ip-172-20-42-249.sa-east-1.compute.internal pod client-containers-feab56ad-77a2-4a2c-a7d8-d8451ae2d1de container agnhost-container: <nil>
STEP: delete the pod
Oct  7 16:41:31.708: INFO: Waiting for pod client-containers-feab56ad-77a2-4a2c-a7d8-d8451ae2d1de to disappear
Oct  7 16:41:31.851: INFO: Pod client-containers-feab56ad-77a2-4a2c-a7d8-d8451ae2d1de no longer exists
[AfterEach] [sig-node] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  7 16:41:31.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-6902" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":144,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:41:32.167: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 79 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":68,"failed":0}
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  7 16:41:31.212: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 8 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  7 16:41:31.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4155" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":-1,"completed":11,"skipped":68,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:41:32.334: INFO: Only supported for providers [gce gke] (not aws)
... skipping 60 lines ...
      Only supported for providers [openstack] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1092
------------------------------
SSSS
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support memory backed volumes of specified size","total":-1,"completed":13,"skipped":80,"failed":0}
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  7 16:41:31.917: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 8 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  7 16:41:33.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5368" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":-1,"completed":14,"skipped":80,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:41:33.474: INFO: Only supported for providers [azure] (not aws)
... skipping 100 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":11,"skipped":105,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:41:34.794: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 22 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50
[It] volume on tmpfs should have the correct mode using FSGroup
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:75
STEP: Creating a pod to test emptydir volume type on tmpfs
Oct  7 16:41:29.811: INFO: Waiting up to 5m0s for pod "pod-ad4b77c5-72d0-4459-bb0c-83dea854d099" in namespace "emptydir-3976" to be "Succeeded or Failed"
Oct  7 16:41:29.954: INFO: Pod "pod-ad4b77c5-72d0-4459-bb0c-83dea854d099": Phase="Pending", Reason="", readiness=false. Elapsed: 143.040678ms
Oct  7 16:41:32.098: INFO: Pod "pod-ad4b77c5-72d0-4459-bb0c-83dea854d099": Phase="Pending", Reason="", readiness=false. Elapsed: 2.286899232s
Oct  7 16:41:34.242: INFO: Pod "pod-ad4b77c5-72d0-4459-bb0c-83dea854d099": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.430948679s
STEP: Saw pod success
Oct  7 16:41:34.242: INFO: Pod "pod-ad4b77c5-72d0-4459-bb0c-83dea854d099" satisfied condition "Succeeded or Failed"
Oct  7 16:41:34.386: INFO: Trying to get logs from node ip-172-20-42-249.sa-east-1.compute.internal pod pod-ad4b77c5-72d0-4459-bb0c-83dea854d099 container test-container: <nil>
STEP: delete the pod
Oct  7 16:41:34.678: INFO: Waiting for pod pod-ad4b77c5-72d0-4459-bb0c-83dea854d099 to disappear
Oct  7 16:41:34.824: INFO: Pod pod-ad4b77c5-72d0-4459-bb0c-83dea854d099 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 6 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:48
    volume on tmpfs should have the correct mode using FSGroup
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:75
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on tmpfs should have the correct mode using FSGroup","total":-1,"completed":19,"skipped":105,"failed":0}

SS
------------------------------
[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 48 lines ...
• [SLOW TEST:23.228 seconds]
[sig-network] Conntrack
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to preserve UDP traffic when server pod cycles for a ClusterIP service
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:203
------------------------------
{"msg":"PASSED [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a ClusterIP service","total":-1,"completed":16,"skipped":91,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  7 16:41:32.228: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward api env vars
Oct  7 16:41:33.092: INFO: Waiting up to 5m0s for pod "downward-api-d6d1d8cb-a1a8-43e4-a4c0-8cf6f74ec034" in namespace "downward-api-522" to be "Succeeded or Failed"
Oct  7 16:41:33.236: INFO: Pod "downward-api-d6d1d8cb-a1a8-43e4-a4c0-8cf6f74ec034": Phase="Pending", Reason="", readiness=false. Elapsed: 143.573828ms
Oct  7 16:41:35.380: INFO: Pod "downward-api-d6d1d8cb-a1a8-43e4-a4c0-8cf6f74ec034": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.287740729s
STEP: Saw pod success
Oct  7 16:41:35.380: INFO: Pod "downward-api-d6d1d8cb-a1a8-43e4-a4c0-8cf6f74ec034" satisfied condition "Succeeded or Failed"
Oct  7 16:41:35.524: INFO: Trying to get logs from node ip-172-20-56-61.sa-east-1.compute.internal pod downward-api-d6d1d8cb-a1a8-43e4-a4c0-8cf6f74ec034 container dapi-container: <nil>
STEP: delete the pod
Oct  7 16:41:35.825: INFO: Waiting for pod downward-api-d6d1d8cb-a1a8-43e4-a4c0-8cf6f74ec034 to disappear
Oct  7 16:41:35.968: INFO: Pod downward-api-d6d1d8cb-a1a8-43e4-a4c0-8cf6f74ec034 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  7 16:41:35.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-522" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":157,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:41:36.276: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 86 lines ...
Oct  7 16:40:33.100: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-1215 to register on node ip-172-20-56-61.sa-east-1.compute.internal
STEP: Creating pod
Oct  7 16:40:38.675: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Oct  7 16:40:38.820: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-qmphf] to have phase Bound
Oct  7 16:40:38.963: INFO: PersistentVolumeClaim pvc-qmphf found and phase=Bound (143.343468ms)
STEP: checking for CSIInlineVolumes feature
Oct  7 16:40:53.975: INFO: Error getting logs for pod inline-volume-26429: the server rejected our request for an unknown reason (get pods inline-volume-26429)
Oct  7 16:40:54.262: INFO: Deleting pod "inline-volume-26429" in namespace "csi-mock-volumes-1215"
Oct  7 16:40:54.406: INFO: Wait up to 5m0s for pod "inline-volume-26429" to be fully deleted
STEP: Deleting the previously created pod
Oct  7 16:41:10.694: INFO: Deleting pod "pvc-volume-tester-dkhsz" in namespace "csi-mock-volumes-1215"
Oct  7 16:41:10.841: INFO: Wait up to 5m0s for pod "pvc-volume-tester-dkhsz" to be fully deleted
STEP: Checking CSI driver logs
Oct  7 16:41:15.273: INFO: Found volume attribute csi.storage.k8s.io/pod.namespace: csi-mock-volumes-1215
Oct  7 16:41:15.273: INFO: Found volume attribute csi.storage.k8s.io/pod.uid: f687f467-4832-436b-a366-24cd2fb096fb
Oct  7 16:41:15.273: INFO: Found volume attribute csi.storage.k8s.io/serviceAccount.name: default
Oct  7 16:41:15.273: INFO: Found volume attribute csi.storage.k8s.io/ephemeral: false
Oct  7 16:41:15.273: INFO: Found volume attribute csi.storage.k8s.io/pod.name: pvc-volume-tester-dkhsz
Oct  7 16:41:15.273: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/f687f467-4832-436b-a366-24cd2fb096fb/volumes/kubernetes.io~csi/pvc-09bec5e8-959a-4475-8947-ebf8b2d1608b/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-dkhsz
Oct  7 16:41:15.273: INFO: Deleting pod "pvc-volume-tester-dkhsz" in namespace "csi-mock-volumes-1215"
STEP: Deleting claim pvc-qmphf
Oct  7 16:41:15.707: INFO: Waiting up to 2m0s for PersistentVolume pvc-09bec5e8-959a-4475-8947-ebf8b2d1608b to get deleted
Oct  7 16:41:15.851: INFO: PersistentVolume pvc-09bec5e8-959a-4475-8947-ebf8b2d1608b was removed
STEP: Deleting storageclass csi-mock-volumes-1215-sctdrz9
... skipping 44 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI workload information using mock driver
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:443
    should be passed when podInfoOnMount=true
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should be passed when podInfoOnMount=true","total":-1,"completed":6,"skipped":66,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  7 16:41:36.294: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name projected-secret-test-map-4a6f034a-3672-4bad-a976-3e413e60b767
STEP: Creating a pod to test consume secrets
Oct  7 16:41:37.309: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ff0b4865-2c1c-495c-8b5b-7fef454eadba" in namespace "projected-6839" to be "Succeeded or Failed"
Oct  7 16:41:37.454: INFO: Pod "pod-projected-secrets-ff0b4865-2c1c-495c-8b5b-7fef454eadba": Phase="Pending", Reason="", readiness=false. Elapsed: 144.694646ms
Oct  7 16:41:39.598: INFO: Pod "pod-projected-secrets-ff0b4865-2c1c-495c-8b5b-7fef454eadba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.288485802s
STEP: Saw pod success
Oct  7 16:41:39.598: INFO: Pod "pod-projected-secrets-ff0b4865-2c1c-495c-8b5b-7fef454eadba" satisfied condition "Succeeded or Failed"
Oct  7 16:41:39.741: INFO: Trying to get logs from node ip-172-20-56-61.sa-east-1.compute.internal pod pod-projected-secrets-ff0b4865-2c1c-495c-8b5b-7fef454eadba container projected-secret-volume-test: <nil>
STEP: delete the pod
Oct  7 16:41:40.053: INFO: Waiting for pod pod-projected-secrets-ff0b4865-2c1c-495c-8b5b-7fef454eadba to disappear
Oct  7 16:41:40.197: INFO: Pod pod-projected-secrets-ff0b4865-2c1c-495c-8b5b-7fef454eadba no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  7 16:41:40.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6839" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":160,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 15 lines ...
• [SLOW TEST:7.327 seconds]
[sig-node] InitContainer [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":-1,"completed":17,"skipped":96,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:41:42.983: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 284 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  7 16:41:44.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-2488" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource ","total":-1,"completed":23,"skipped":167,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:41:44.933: INFO: Only supported for providers [openstack] (not aws)
... skipping 53 lines ...
Oct  7 16:40:57.701: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass volume-3156gswzv
STEP: creating a claim
Oct  7 16:40:57.845: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod exec-volume-test-dynamicpv-rrqn
STEP: Creating a pod to test exec-volume-test
Oct  7 16:40:58.276: INFO: Waiting up to 5m0s for pod "exec-volume-test-dynamicpv-rrqn" in namespace "volume-3156" to be "Succeeded or Failed"
Oct  7 16:40:58.420: INFO: Pod "exec-volume-test-dynamicpv-rrqn": Phase="Pending", Reason="", readiness=false. Elapsed: 143.397108ms
Oct  7 16:41:00.564: INFO: Pod "exec-volume-test-dynamicpv-rrqn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.287287456s
Oct  7 16:41:02.708: INFO: Pod "exec-volume-test-dynamicpv-rrqn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.431883975s
Oct  7 16:41:04.852: INFO: Pod "exec-volume-test-dynamicpv-rrqn": Phase="Pending", Reason="", readiness=false. Elapsed: 6.575262191s
Oct  7 16:41:06.996: INFO: Pod "exec-volume-test-dynamicpv-rrqn": Phase="Pending", Reason="", readiness=false. Elapsed: 8.719724759s
Oct  7 16:41:09.145: INFO: Pod "exec-volume-test-dynamicpv-rrqn": Phase="Pending", Reason="", readiness=false. Elapsed: 10.869196736s
... skipping 2 lines ...
Oct  7 16:41:15.585: INFO: Pod "exec-volume-test-dynamicpv-rrqn": Phase="Pending", Reason="", readiness=false. Elapsed: 17.308349043s
Oct  7 16:41:17.730: INFO: Pod "exec-volume-test-dynamicpv-rrqn": Phase="Pending", Reason="", readiness=false. Elapsed: 19.453673149s
Oct  7 16:41:19.886: INFO: Pod "exec-volume-test-dynamicpv-rrqn": Phase="Pending", Reason="", readiness=false. Elapsed: 21.610087752s
Oct  7 16:41:22.033: INFO: Pod "exec-volume-test-dynamicpv-rrqn": Phase="Pending", Reason="", readiness=false. Elapsed: 23.756730886s
Oct  7 16:41:24.177: INFO: Pod "exec-volume-test-dynamicpv-rrqn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 25.900773502s
STEP: Saw pod success
Oct  7 16:41:24.177: INFO: Pod "exec-volume-test-dynamicpv-rrqn" satisfied condition "Succeeded or Failed"
Oct  7 16:41:24.325: INFO: Trying to get logs from node ip-172-20-42-249.sa-east-1.compute.internal pod exec-volume-test-dynamicpv-rrqn container exec-container-dynamicpv-rrqn: <nil>
STEP: delete the pod
Oct  7 16:41:24.618: INFO: Waiting for pod exec-volume-test-dynamicpv-rrqn to disappear
Oct  7 16:41:24.761: INFO: Pod exec-volume-test-dynamicpv-rrqn no longer exists
STEP: Deleting pod exec-volume-test-dynamicpv-rrqn
Oct  7 16:41:24.761: INFO: Deleting pod "exec-volume-test-dynamicpv-rrqn" in namespace "volume-3156"
... skipping 43 lines ...
• [SLOW TEST:21.920 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be ready immediately after startupProbe succeeds
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:400
------------------------------
{"msg":"PASSED [sig-node] Probing container should be ready immediately after startupProbe succeeds","total":-1,"completed":9,"skipped":57,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:41:47.992: INFO: Only supported for providers [gce gke] (not aws)
... skipping 208 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI attach test using mock driver
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:316
    should not require VolumeAttach for drivers without attachment
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:338
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":12,"skipped":74,"failed":0}
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  7 16:39:26.720: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 20 lines ...
• [SLOW TEST:144.964 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":74,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64
[It] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod with the kernel.shm_rmid_forced sysctl
STEP: Watching for error events or started pod
STEP: Waiting for pod completion
STEP: Checking that the pod succeeded
STEP: Getting logs from the pod
STEP: Checking that the sysctl is actually updated
[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  7 16:41:51.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sysctl-3955" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":10,"skipped":71,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:41:51.843: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 88 lines ...
Oct  7 16:41:42.523: INFO: PersistentVolumeClaim pvc-krxpx found but phase is Pending instead of Bound.
Oct  7 16:41:44.666: INFO: PersistentVolumeClaim pvc-krxpx found and phase=Bound (6.572833352s)
Oct  7 16:41:44.666: INFO: Waiting up to 3m0s for PersistentVolume local-z2jsr to have phase Bound
Oct  7 16:41:44.808: INFO: PersistentVolume local-z2jsr found and phase=Bound (142.613518ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-fkz8
STEP: Creating a pod to test subpath
Oct  7 16:41:45.241: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-fkz8" in namespace "provisioning-2813" to be "Succeeded or Failed"
Oct  7 16:41:45.384: INFO: Pod "pod-subpath-test-preprovisionedpv-fkz8": Phase="Pending", Reason="", readiness=false. Elapsed: 142.870617ms
Oct  7 16:41:47.528: INFO: Pod "pod-subpath-test-preprovisionedpv-fkz8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.286021223s
Oct  7 16:41:49.672: INFO: Pod "pod-subpath-test-preprovisionedpv-fkz8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.430017568s
STEP: Saw pod success
Oct  7 16:41:49.672: INFO: Pod "pod-subpath-test-preprovisionedpv-fkz8" satisfied condition "Succeeded or Failed"
Oct  7 16:41:49.815: INFO: Trying to get logs from node ip-172-20-56-61.sa-east-1.compute.internal pod pod-subpath-test-preprovisionedpv-fkz8 container test-container-subpath-preprovisionedpv-fkz8: <nil>
STEP: delete the pod
Oct  7 16:41:50.120: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-fkz8 to disappear
Oct  7 16:41:50.264: INFO: Pod pod-subpath-test-preprovisionedpv-fkz8 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-fkz8
Oct  7 16:41:50.264: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-fkz8" in namespace "provisioning-2813"
STEP: Creating pod pod-subpath-test-preprovisionedpv-fkz8
STEP: Creating a pod to test subpath
Oct  7 16:41:50.551: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-fkz8" in namespace "provisioning-2813" to be "Succeeded or Failed"
Oct  7 16:41:50.694: INFO: Pod "pod-subpath-test-preprovisionedpv-fkz8": Phase="Pending", Reason="", readiness=false. Elapsed: 142.737978ms
Oct  7 16:41:52.838: INFO: Pod "pod-subpath-test-preprovisionedpv-fkz8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.286745931s
STEP: Saw pod success
Oct  7 16:41:52.838: INFO: Pod "pod-subpath-test-preprovisionedpv-fkz8" satisfied condition "Succeeded or Failed"
Oct  7 16:41:52.986: INFO: Trying to get logs from node ip-172-20-56-61.sa-east-1.compute.internal pod pod-subpath-test-preprovisionedpv-fkz8 container test-container-subpath-preprovisionedpv-fkz8: <nil>
STEP: delete the pod
Oct  7 16:41:53.278: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-fkz8 to disappear
Oct  7 16:41:53.421: INFO: Pod pod-subpath-test-preprovisionedpv-fkz8 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-fkz8
Oct  7 16:41:53.421: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-fkz8" in namespace "provisioning-2813"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:399
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":15,"skipped":91,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:41:55.404: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 72 lines ...
• [SLOW TEST:20.558 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to change the type from NodePort to ExternalName [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":-1,"completed":20,"skipped":107,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
... skipping 54 lines ...
Oct  7 16:40:54.697: INFO: PersistentVolumeClaim csi-hostpath784wl found but phase is Pending instead of Bound.
Oct  7 16:40:56.843: INFO: PersistentVolumeClaim csi-hostpath784wl found but phase is Pending instead of Bound.
Oct  7 16:40:58.988: INFO: PersistentVolumeClaim csi-hostpath784wl found but phase is Pending instead of Bound.
Oct  7 16:41:01.131: INFO: PersistentVolumeClaim csi-hostpath784wl found and phase=Bound (6.576192541s)
STEP: Expanding non-expandable pvc
Oct  7 16:41:01.421: INFO: currentPvcSize {{1073741824 0} {<nil>} 1Gi BinarySI}, newSize {{2147483648 0} {<nil>}  BinarySI}
Oct  7 16:41:01.708: INFO: Error updating pvc csi-hostpath784wl: persistentvolumeclaims "csi-hostpath784wl" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct  7 16:41:03.994: INFO: Error updating pvc csi-hostpath784wl: persistentvolumeclaims "csi-hostpath784wl" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct  7 16:41:05.995: INFO: Error updating pvc csi-hostpath784wl: persistentvolumeclaims "csi-hostpath784wl" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct  7 16:41:07.994: INFO: Error updating pvc csi-hostpath784wl: persistentvolumeclaims "csi-hostpath784wl" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct  7 16:41:09.995: INFO: Error updating pvc csi-hostpath784wl: persistentvolumeclaims "csi-hostpath784wl" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct  7 16:41:11.995: INFO: Error updating pvc csi-hostpath784wl: persistentvolumeclaims "csi-hostpath784wl" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct  7 16:41:14.002: INFO: Error updating pvc csi-hostpath784wl: persistentvolumeclaims "csi-hostpath784wl" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct  7 16:41:15.995: INFO: Error updating pvc csi-hostpath784wl: persistentvolumeclaims "csi-hostpath784wl" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct  7 16:41:18.004: INFO: Error updating pvc csi-hostpath784wl: persistentvolumeclaims "csi-hostpath784wl" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct  7 16:41:19.996: INFO: Error updating pvc csi-hostpath784wl: persistentvolumeclaims "csi-hostpath784wl" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct  7 16:41:21.997: INFO: Error updating pvc csi-hostpath784wl: persistentvolumeclaims "csi-hostpath784wl" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct  7 16:41:23.995: INFO: Error updating pvc csi-hostpath784wl: persistentvolumeclaims "csi-hostpath784wl" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct  7 16:41:25.998: INFO: Error updating pvc csi-hostpath784wl: persistentvolumeclaims "csi-hostpath784wl" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct  7 16:41:27.994: INFO: Error updating pvc csi-hostpath784wl: persistentvolumeclaims "csi-hostpath784wl" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct  7 16:41:29.996: INFO: Error updating pvc csi-hostpath784wl: persistentvolumeclaims "csi-hostpath784wl" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct  7 16:41:31.995: INFO: Error updating pvc csi-hostpath784wl: persistentvolumeclaims "csi-hostpath784wl" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct  7 16:41:32.283: INFO: Error updating pvc csi-hostpath784wl: persistentvolumeclaims "csi-hostpath784wl" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
STEP: Deleting pvc
Oct  7 16:41:32.284: INFO: Deleting PersistentVolumeClaim "csi-hostpath784wl"
Oct  7 16:41:32.430: INFO: Waiting up to 5m0s for PersistentVolume pvc-a3e74775-6f17-4d2a-903e-100c0e416116 to get deleted
Oct  7 16:41:32.574: INFO: PersistentVolume pvc-a3e74775-6f17-4d2a-903e-100c0e416116 was removed
STEP: Deleting sc
STEP: deleting the test namespace: volume-expand-5512
... skipping 46 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not allow expansion of pvcs without AllowVolumeExpansion property
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:157
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property","total":-1,"completed":7,"skipped":61,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  7 16:41:55.420: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename topology
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to schedule a pod which has topologies that conflict with AllowedTopologies
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192
Oct  7 16:41:56.279: INFO: found topology map[topology.kubernetes.io/zone:sa-east-1a]
Oct  7 16:41:56.279: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
Oct  7 16:41:56.279: INFO: Not enough topologies in cluster -- skipping
STEP: Deleting pvc
STEP: Deleting sc
... skipping 7 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: aws]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Not enough topologies in cluster -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:199
------------------------------
... skipping 67 lines ...
Oct  7 16:41:20.751: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-7046
Oct  7 16:41:20.910: INFO: creating *v1.StatefulSet: csi-mock-volumes-7046-6784/csi-mockplugin-attacher
Oct  7 16:41:21.060: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-7046"
Oct  7 16:41:21.206: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-7046 to register on node ip-172-20-47-191.sa-east-1.compute.internal
STEP: Creating pod
STEP: checking for CSIInlineVolumes feature
Oct  7 16:41:26.249: INFO: Error getting logs for pod inline-volume-k62sd: the server rejected our request for an unknown reason (get pods inline-volume-k62sd)
Oct  7 16:41:26.423: INFO: Deleting pod "inline-volume-k62sd" in namespace "csi-mock-volumes-7046"
Oct  7 16:41:26.599: INFO: Wait up to 5m0s for pod "inline-volume-k62sd" to be fully deleted
STEP: Deleting the previously created pod
Oct  7 16:41:34.908: INFO: Deleting pod "pvc-volume-tester-nmrgq" in namespace "csi-mock-volumes-7046"
Oct  7 16:41:35.053: INFO: Wait up to 5m0s for pod "pvc-volume-tester-nmrgq" to be fully deleted
STEP: Checking CSI driver logs
Oct  7 16:41:41.496: INFO: Found volume attribute csi.storage.k8s.io/pod.namespace: csi-mock-volumes-7046
Oct  7 16:41:41.496: INFO: Found volume attribute csi.storage.k8s.io/pod.uid: 3032ecc8-fcd9-4449-880e-a88aa5e51a26
Oct  7 16:41:41.496: INFO: Found volume attribute csi.storage.k8s.io/serviceAccount.name: default
Oct  7 16:41:41.496: INFO: Found volume attribute csi.storage.k8s.io/ephemeral: true
Oct  7 16:41:41.496: INFO: Found volume attribute csi.storage.k8s.io/pod.name: pvc-volume-tester-nmrgq
Oct  7 16:41:41.496: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"csi-81e1677cec71cf49fb755c09093185cb5cb970429eb865689e6a803205122a48","target_path":"/var/lib/kubelet/pods/3032ecc8-fcd9-4449-880e-a88aa5e51a26/volumes/kubernetes.io~csi/my-volume/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-nmrgq
Oct  7 16:41:41.496: INFO: Deleting pod "pvc-volume-tester-nmrgq" in namespace "csi-mock-volumes-7046"
STEP: Cleaning up resources
STEP: deleting the test namespace: csi-mock-volumes-7046
STEP: Waiting for namespaces [csi-mock-volumes-7046] to vanish
STEP: uninstalling csi mock driver
... skipping 40 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI workload information using mock driver
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:443
    contain ephemeral=true when using inline volume
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","total":-1,"completed":14,"skipped":103,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:41:59.052: INFO: Only supported for providers [gce gke] (not aws)
... skipping 21 lines ...
Oct  7 16:41:56.407: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward api env vars
Oct  7 16:41:57.271: INFO: Waiting up to 5m0s for pod "downward-api-8f2f96ee-5689-4ae0-9f81-2b8785eeaf03" in namespace "downward-api-6780" to be "Succeeded or Failed"
Oct  7 16:41:57.414: INFO: Pod "downward-api-8f2f96ee-5689-4ae0-9f81-2b8785eeaf03": Phase="Pending", Reason="", readiness=false. Elapsed: 142.656297ms
Oct  7 16:41:59.557: INFO: Pod "downward-api-8f2f96ee-5689-4ae0-9f81-2b8785eeaf03": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.286071961s
STEP: Saw pod success
Oct  7 16:41:59.558: INFO: Pod "downward-api-8f2f96ee-5689-4ae0-9f81-2b8785eeaf03" satisfied condition "Succeeded or Failed"
Oct  7 16:41:59.701: INFO: Trying to get logs from node ip-172-20-43-90.sa-east-1.compute.internal pod downward-api-8f2f96ee-5689-4ae0-9f81-2b8785eeaf03 container dapi-container: <nil>
STEP: delete the pod
Oct  7 16:41:59.993: INFO: Waiting for pod downward-api-8f2f96ee-5689-4ae0-9f81-2b8785eeaf03 to disappear
Oct  7 16:42:00.140: INFO: Pod downward-api-8f2f96ee-5689-4ae0-9f81-2b8785eeaf03 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  7 16:42:00.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6780" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":64,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:42:00.439: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 20 lines ...
Oct  7 16:41:56.744: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir volume type on node default medium
Oct  7 16:41:57.648: INFO: Waiting up to 5m0s for pod "pod-46103a36-cebb-48da-9cb7-60e61219e07b" in namespace "emptydir-7754" to be "Succeeded or Failed"
Oct  7 16:41:57.791: INFO: Pod "pod-46103a36-cebb-48da-9cb7-60e61219e07b": Phase="Pending", Reason="", readiness=false. Elapsed: 143.380417ms
Oct  7 16:41:59.935: INFO: Pod "pod-46103a36-cebb-48da-9cb7-60e61219e07b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.286950861s
STEP: Saw pod success
Oct  7 16:41:59.935: INFO: Pod "pod-46103a36-cebb-48da-9cb7-60e61219e07b" satisfied condition "Succeeded or Failed"
Oct  7 16:42:00.077: INFO: Trying to get logs from node ip-172-20-42-249.sa-east-1.compute.internal pod pod-46103a36-cebb-48da-9cb7-60e61219e07b container test-container: <nil>
STEP: delete the pod
Oct  7 16:42:00.370: INFO: Waiting for pod pod-46103a36-cebb-48da-9cb7-60e61219e07b to disappear
Oct  7 16:42:00.514: INFO: Pod pod-46103a36-cebb-48da-9cb7-60e61219e07b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  7 16:42:00.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7754" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":99,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:42:00.826: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 14 lines ...
      Driver hostPath doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","total":-1,"completed":14,"skipped":79,"failed":0}
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  7 16:41:55.041: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 16 lines ...
• [SLOW TEST:6.021 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":79,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:42:01.072: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 125 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  7 16:42:01.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-6793" for this suite.

•
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should guarantee kube-root-ca.crt exist in any namespace [Conformance]","total":-1,"completed":15,"skipped":105,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  7 16:41:43.124: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should set ownership and permission when RunAsUser or FsGroup is present [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/service_accounts.go:488
STEP: Creating a pod to test service account token: 
Oct  7 16:41:43.988: INFO: Waiting up to 5m0s for pod "test-pod-31e46d3d-e539-4816-9f60-86cbfec85eaf" in namespace "svcaccounts-8038" to be "Succeeded or Failed"
Oct  7 16:41:44.132: INFO: Pod "test-pod-31e46d3d-e539-4816-9f60-86cbfec85eaf": Phase="Pending", Reason="", readiness=false. Elapsed: 144.313858ms
Oct  7 16:41:46.277: INFO: Pod "test-pod-31e46d3d-e539-4816-9f60-86cbfec85eaf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.288680942s
STEP: Saw pod success
Oct  7 16:41:46.277: INFO: Pod "test-pod-31e46d3d-e539-4816-9f60-86cbfec85eaf" satisfied condition "Succeeded or Failed"
Oct  7 16:41:46.420: INFO: Trying to get logs from node ip-172-20-42-249.sa-east-1.compute.internal pod test-pod-31e46d3d-e539-4816-9f60-86cbfec85eaf container agnhost-container: <nil>
STEP: delete the pod
Oct  7 16:41:46.716: INFO: Waiting for pod test-pod-31e46d3d-e539-4816-9f60-86cbfec85eaf to disappear
Oct  7 16:41:46.859: INFO: Pod test-pod-31e46d3d-e539-4816-9f60-86cbfec85eaf no longer exists
STEP: Creating a pod to test service account token: 
Oct  7 16:41:47.003: INFO: Waiting up to 5m0s for pod "test-pod-31e46d3d-e539-4816-9f60-86cbfec85eaf" in namespace "svcaccounts-8038" to be "Succeeded or Failed"
Oct  7 16:41:47.147: INFO: Pod "test-pod-31e46d3d-e539-4816-9f60-86cbfec85eaf": Phase="Pending", Reason="", readiness=false. Elapsed: 144.045018ms
Oct  7 16:41:49.292: INFO: Pod "test-pod-31e46d3d-e539-4816-9f60-86cbfec85eaf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.288288644s
STEP: Saw pod success
Oct  7 16:41:49.292: INFO: Pod "test-pod-31e46d3d-e539-4816-9f60-86cbfec85eaf" satisfied condition "Succeeded or Failed"
Oct  7 16:41:49.435: INFO: Trying to get logs from node ip-172-20-42-249.sa-east-1.compute.internal pod test-pod-31e46d3d-e539-4816-9f60-86cbfec85eaf container agnhost-container: <nil>
STEP: delete the pod
Oct  7 16:41:49.729: INFO: Waiting for pod test-pod-31e46d3d-e539-4816-9f60-86cbfec85eaf to disappear
Oct  7 16:41:49.872: INFO: Pod test-pod-31e46d3d-e539-4816-9f60-86cbfec85eaf no longer exists
STEP: Creating a pod to test service account token: 
Oct  7 16:41:50.018: INFO: Waiting up to 5m0s for pod "test-pod-31e46d3d-e539-4816-9f60-86cbfec85eaf" in namespace "svcaccounts-8038" to be "Succeeded or Failed"
Oct  7 16:41:50.161: INFO: Pod "test-pod-31e46d3d-e539-4816-9f60-86cbfec85eaf": Phase="Pending", Reason="", readiness=false. Elapsed: 143.219647ms
Oct  7 16:41:52.307: INFO: Pod "test-pod-31e46d3d-e539-4816-9f60-86cbfec85eaf": Phase="Running", Reason="", readiness=true. Elapsed: 2.289211664s
Oct  7 16:41:54.459: INFO: Pod "test-pod-31e46d3d-e539-4816-9f60-86cbfec85eaf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.441103119s
STEP: Saw pod success
Oct  7 16:41:54.459: INFO: Pod "test-pod-31e46d3d-e539-4816-9f60-86cbfec85eaf" satisfied condition "Succeeded or Failed"
Oct  7 16:41:54.602: INFO: Trying to get logs from node ip-172-20-56-61.sa-east-1.compute.internal pod test-pod-31e46d3d-e539-4816-9f60-86cbfec85eaf container agnhost-container: <nil>
STEP: delete the pod
Oct  7 16:41:54.901: INFO: Waiting for pod test-pod-31e46d3d-e539-4816-9f60-86cbfec85eaf to disappear
Oct  7 16:41:55.044: INFO: Pod test-pod-31e46d3d-e539-4816-9f60-86cbfec85eaf no longer exists
STEP: Creating a pod to test service account token: 
Oct  7 16:41:55.190: INFO: Waiting up to 5m0s for pod "test-pod-31e46d3d-e539-4816-9f60-86cbfec85eaf" in namespace "svcaccounts-8038" to be "Succeeded or Failed"
Oct  7 16:41:55.333: INFO: Pod "test-pod-31e46d3d-e539-4816-9f60-86cbfec85eaf": Phase="Pending", Reason="", readiness=false. Elapsed: 143.319168ms
Oct  7 16:41:57.476: INFO: Pod "test-pod-31e46d3d-e539-4816-9f60-86cbfec85eaf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.286749912s
Oct  7 16:41:59.620: INFO: Pod "test-pod-31e46d3d-e539-4816-9f60-86cbfec85eaf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.430156726s
Oct  7 16:42:01.764: INFO: Pod "test-pod-31e46d3d-e539-4816-9f60-86cbfec85eaf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.574682431s
STEP: Saw pod success
Oct  7 16:42:01.764: INFO: Pod "test-pod-31e46d3d-e539-4816-9f60-86cbfec85eaf" satisfied condition "Succeeded or Failed"
Oct  7 16:42:01.908: INFO: Trying to get logs from node ip-172-20-56-61.sa-east-1.compute.internal pod test-pod-31e46d3d-e539-4816-9f60-86cbfec85eaf container agnhost-container: <nil>
STEP: delete the pod
Oct  7 16:42:02.216: INFO: Waiting for pod test-pod-31e46d3d-e539-4816-9f60-86cbfec85eaf to disappear
Oct  7 16:42:02.359: INFO: Pod test-pod-31e46d3d-e539-4816-9f60-86cbfec85eaf no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:19.523 seconds]
[sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should set ownership and permission when RunAsUser or FsGroup is present [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/service_accounts.go:488
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should set ownership and permission when RunAsUser or FsGroup is present [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":18,"skipped":126,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:42:02.658: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 34 lines ...
• [SLOW TEST:7.434 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks succeed
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:51
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks succeed","total":-1,"completed":17,"skipped":105,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:42:08.296: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
... skipping 51 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: hostPath]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver hostPath doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 175 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95
    should have a working scale subresource [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":-1,"completed":24,"skipped":171,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:42:08.892: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 14 lines ...
      Driver local doesn't support InlineVolume -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SSSSSS
------------------------------
{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should not launch unsafe, but not explicitly enabled sysctls on the node [MinimumKubeletVersion:1.21]","total":-1,"completed":6,"skipped":26,"failed":0}
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  7 16:40:10.251: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename csi-mock-volumes
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 35 lines ...
Oct  7 16:40:15.801: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-9083
Oct  7 16:40:15.946: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-9083
Oct  7 16:40:16.094: INFO: creating *v1.StatefulSet: csi-mock-volumes-9083-3205/csi-mockplugin
Oct  7 16:40:16.247: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-9083
Oct  7 16:40:16.390: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-9083"
Oct  7 16:40:16.534: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-9083 to register on node ip-172-20-47-191.sa-east-1.compute.internal
I1007 16:40:25.707267    5435 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null}
I1007 16:40:25.850402    5435 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-9083","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null}
I1007 16:40:25.993755    5435 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}},{"Type":{"Service":{"type":2}}}]},"Error":"","FullError":null}
I1007 16:40:26.137902    5435 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null}
I1007 16:40:26.456740    5435 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-9083","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null}
I1007 16:40:27.256094    5435 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-9083","accessible_topology":{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}},"Error":"","FullError":null}
STEP: Creating pod
Oct  7 16:40:33.796: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
I1007 16:40:34.097567    5435 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-b3136f6d-903e-47e2-acbe-24b7fc6d155a","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}],"accessibility_requirements":{"requisite":[{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}],"preferred":[{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}]}},"Response":null,"Error":"rpc error: code = ResourceExhausted desc = fake error","FullError":{"code":8,"message":"fake error"}}
I1007 16:40:36.993121    5435 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-b3136f6d-903e-47e2-acbe-24b7fc6d155a","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}],"accessibility_requirements":{"requisite":[{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}],"preferred":[{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}]}},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-b3136f6d-903e-47e2-acbe-24b7fc6d155a"},"accessible_topology":[{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}]}},"Error":"","FullError":null}
I1007 16:40:39.115783    5435 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
Oct  7 16:40:39.263: INFO: >>> kubeConfig: /root/.kube/config
I1007 16:40:40.206018    5435 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-b3136f6d-903e-47e2-acbe-24b7fc6d155a/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-b3136f6d-903e-47e2-acbe-24b7fc6d155a","storage.kubernetes.io/csiProvisionerIdentity":"1633624826210-8081-csi-mock-csi-mock-volumes-9083"}},"Response":{},"Error":"","FullError":null}
I1007 16:40:40.555463    5435 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
Oct  7 16:40:40.701: INFO: >>> kubeConfig: /root/.kube/config
Oct  7 16:40:41.699: INFO: >>> kubeConfig: /root/.kube/config
Oct  7 16:40:42.662: INFO: >>> kubeConfig: /root/.kube/config
I1007 16:40:43.618691    5435 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-b3136f6d-903e-47e2-acbe-24b7fc6d155a/globalmount","target_path":"/var/lib/kubelet/pods/f3235223-32fe-416e-b8c3-c102022b998a/volumes/kubernetes.io~csi/pvc-b3136f6d-903e-47e2-acbe-24b7fc6d155a/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-b3136f6d-903e-47e2-acbe-24b7fc6d155a","storage.kubernetes.io/csiProvisionerIdentity":"1633624826210-8081-csi-mock-csi-mock-volumes-9083"}},"Response":{},"Error":"","FullError":null}
Oct  7 16:40:46.374: INFO: Deleting pod "pvc-volume-tester-wlrtx" in namespace "csi-mock-volumes-9083"
Oct  7 16:40:46.521: INFO: Wait up to 5m0s for pod "pvc-volume-tester-wlrtx" to be fully deleted
Oct  7 16:40:48.185: INFO: >>> kubeConfig: /root/.kube/config
I1007 16:40:49.166534    5435 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/f3235223-32fe-416e-b8c3-c102022b998a/volumes/kubernetes.io~csi/pvc-b3136f6d-903e-47e2-acbe-24b7fc6d155a/mount"},"Response":{},"Error":"","FullError":null}
I1007 16:40:49.403488    5435 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I1007 16:40:49.550299    5435 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-b3136f6d-903e-47e2-acbe-24b7fc6d155a/globalmount"},"Response":{},"Error":"","FullError":null}
I1007 16:41:06.980132    5435 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null}
STEP: Checking PVC events
Oct  7 16:41:07.961: INFO: PVC event ADDED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-m7wr9", GenerateName:"pvc-", Namespace:"csi-mock-volumes-9083", SelfLink:"", UID:"b3136f6d-903e-47e2-acbe-24b7fc6d155a", ResourceVersion:"14348", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63769221633, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002d00498), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002d004b0)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc0028c8b70), VolumeMode:(*v1.PersistentVolumeMode)(0xc0028c8b80), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Oct  7 16:41:07.961: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-m7wr9", GenerateName:"pvc-", Namespace:"csi-mock-volumes-9083", SelfLink:"", UID:"b3136f6d-903e-47e2-acbe-24b7fc6d155a", ResourceVersion:"14352", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63769221633, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.kubernetes.io/selected-node":"ip-172-20-47-191.sa-east-1.compute.internal"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002d00678), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002d00690)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002d006a8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002d006c0)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc0028c8cc0), VolumeMode:(*v1.PersistentVolumeMode)(0xc0028c8cd0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Oct  7 16:41:07.961: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-m7wr9", GenerateName:"pvc-", Namespace:"csi-mock-volumes-9083", SelfLink:"", UID:"b3136f6d-903e-47e2-acbe-24b7fc6d155a", ResourceVersion:"14353", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63769221633, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-9083", "volume.kubernetes.io/selected-node":"ip-172-20-47-191.sa-east-1.compute.internal"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00382fc68), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00382fc80)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00382fc98), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00382fcb0)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00382fcc8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00382fce0)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc0034dfa20), VolumeMode:(*v1.PersistentVolumeMode)(0xc0034dfa30), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Oct  7 16:41:07.962: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-m7wr9", GenerateName:"pvc-", Namespace:"csi-mock-volumes-9083", SelfLink:"", UID:"b3136f6d-903e-47e2-acbe-24b7fc6d155a", ResourceVersion:"14358", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63769221633, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-9083"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00382fcf8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00382fd10)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00382fd28), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00382fd40)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00382fd58), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00382fd70)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc0034dfa60), VolumeMode:(*v1.PersistentVolumeMode)(0xc0034dfa70), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Oct  7 16:41:07.962: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-m7wr9", GenerateName:"pvc-", Namespace:"csi-mock-volumes-9083", SelfLink:"", UID:"b3136f6d-903e-47e2-acbe-24b7fc6d155a", ResourceVersion:"14436", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63769221633, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-9083", "volume.kubernetes.io/selected-node":"ip-172-20-47-191.sa-east-1.compute.internal"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00382fda0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00382fdb8)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00382fdd0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00382fde8)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00382fe00), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00382fe18)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc0034dfaa0), VolumeMode:(*v1.PersistentVolumeMode)(0xc0034dfab0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, 
Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
... skipping 51 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  storage capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:900
    exhausted, late binding, with topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:958
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume storage capacity exhausted, late binding, with topology","total":-1,"completed":7,"skipped":26,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:42:11.818: INFO: Only supported for providers [gce gke] (not aws)
... skipping 69 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:449
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":12,"skipped":107,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:42:12.780: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 22 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Oct  7 16:42:09.805: INFO: Waiting up to 5m0s for pod "downwardapi-volume-23cf8dac-8470-4d53-b8ce-b8431b900925" in namespace "downward-api-358" to be "Succeeded or Failed"
Oct  7 16:42:09.949: INFO: Pod "downwardapi-volume-23cf8dac-8470-4d53-b8ce-b8431b900925": Phase="Pending", Reason="", readiness=false. Elapsed: 143.33116ms
Oct  7 16:42:12.093: INFO: Pod "downwardapi-volume-23cf8dac-8470-4d53-b8ce-b8431b900925": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.287857341s
STEP: Saw pod success
Oct  7 16:42:12.093: INFO: Pod "downwardapi-volume-23cf8dac-8470-4d53-b8ce-b8431b900925" satisfied condition "Succeeded or Failed"
Oct  7 16:42:12.237: INFO: Trying to get logs from node ip-172-20-47-191.sa-east-1.compute.internal pod downwardapi-volume-23cf8dac-8470-4d53-b8ce-b8431b900925 container client-container: <nil>
STEP: delete the pod
Oct  7 16:42:12.530: INFO: Waiting for pod downwardapi-volume-23cf8dac-8470-4d53-b8ce-b8431b900925 to disappear
Oct  7 16:42:12.674: INFO: Pod downwardapi-volume-23cf8dac-8470-4d53-b8ce-b8431b900925 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  7 16:42:12.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-358" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":183,"failed":0}

SSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:42:13.048: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 52 lines ...
Oct  7 16:42:11.635: INFO: The status of Pod pod-update-activedeadlineseconds-3382c7e2-5a6d-45d3-a060-ddf1e40fdd42 is Running (Ready = true)
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Oct  7 16:42:12.712: INFO: Successfully updated pod "pod-update-activedeadlineseconds-3382c7e2-5a6d-45d3-a060-ddf1e40fdd42"
Oct  7 16:42:12.712: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-3382c7e2-5a6d-45d3-a060-ddf1e40fdd42" in namespace "pods-1453" to be "terminated due to deadline exceeded"
Oct  7 16:42:12.861: INFO: Pod "pod-update-activedeadlineseconds-3382c7e2-5a6d-45d3-a060-ddf1e40fdd42": Phase="Running", Reason="", readiness=true. Elapsed: 148.889127ms
Oct  7 16:42:15.004: INFO: Pod "pod-update-activedeadlineseconds-3382c7e2-5a6d-45d3-a060-ddf1e40fdd42": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.291823373s
Oct  7 16:42:15.004: INFO: Pod "pod-update-activedeadlineseconds-3382c7e2-5a6d-45d3-a060-ddf1e40fdd42" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  7 16:42:15.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1453" for this suite.


• [SLOW TEST:6.803 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":143,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:42:15.317: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 43 lines ...
Oct  7 16:41:58.728: INFO: PersistentVolumeClaim pvc-vqtjl found and phase=Bound (2.287520464s)
Oct  7 16:41:58.728: INFO: Waiting up to 3m0s for PersistentVolume nfs-clhzv to have phase Bound
Oct  7 16:41:58.872: INFO: PersistentVolume nfs-clhzv found and phase=Bound (144.134476ms)
STEP: Checking pod has write access to PersistentVolume
Oct  7 16:41:59.160: INFO: Creating nfs test pod
Oct  7 16:41:59.306: INFO: Pod should terminate with exitcode 0 (success)
Oct  7 16:41:59.306: INFO: Waiting up to 5m0s for pod "pvc-tester-ggmn8" in namespace "pv-9114" to be "Succeeded or Failed"
Oct  7 16:41:59.450: INFO: Pod "pvc-tester-ggmn8": Phase="Pending", Reason="", readiness=false. Elapsed: 143.595457ms
Oct  7 16:42:01.594: INFO: Pod "pvc-tester-ggmn8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.287996981s
STEP: Saw pod success
Oct  7 16:42:01.594: INFO: Pod "pvc-tester-ggmn8" satisfied condition "Succeeded or Failed"
Oct  7 16:42:01.594: INFO: Pod pvc-tester-ggmn8 succeeded 
Oct  7 16:42:01.594: INFO: Deleting pod "pvc-tester-ggmn8" in namespace "pv-9114"
Oct  7 16:42:01.757: INFO: Wait up to 5m0s for pod "pvc-tester-ggmn8" to be fully deleted
STEP: Deleting the PVC to invoke the reclaim policy.
Oct  7 16:42:01.901: INFO: Deleting PVC pvc-vqtjl to trigger reclamation of PV 
Oct  7 16:42:01.901: INFO: Deleting PersistentVolumeClaim "pvc-vqtjl"
... skipping 23 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:122
    with Single PV - PVC pairs
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:155
      create a PVC and a pre-bound PV: test write access
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:187
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PVC and a pre-bound PV: test write access","total":-1,"completed":11,"skipped":79,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Oct  7 16:42:12.697: INFO: Waiting up to 5m0s for pod "downwardapi-volume-850d4c70-11df-431a-a540-1d51581764cc" in namespace "downward-api-6082" to be "Succeeded or Failed"
Oct  7 16:42:12.849: INFO: Pod "downwardapi-volume-850d4c70-11df-431a-a540-1d51581764cc": Phase="Pending", Reason="", readiness=false. Elapsed: 151.699867ms
Oct  7 16:42:14.994: INFO: Pod "downwardapi-volume-850d4c70-11df-431a-a540-1d51581764cc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.297111773s
STEP: Saw pod success
Oct  7 16:42:14.994: INFO: Pod "downwardapi-volume-850d4c70-11df-431a-a540-1d51581764cc" satisfied condition "Succeeded or Failed"
Oct  7 16:42:15.138: INFO: Trying to get logs from node ip-172-20-43-90.sa-east-1.compute.internal pod downwardapi-volume-850d4c70-11df-431a-a540-1d51581764cc container client-container: <nil>
STEP: delete the pod
Oct  7 16:42:15.431: INFO: Waiting for pod downwardapi-volume-850d4c70-11df-431a-a540-1d51581764cc to disappear
Oct  7 16:42:15.575: INFO: Pod downwardapi-volume-850d4c70-11df-431a-a540-1d51581764cc no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  7 16:42:15.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6082" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":30,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:42:15.885: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
... skipping 60 lines ...
      Only supported for providers [vsphere] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1437
------------------------------
SSSSSS
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI attach test using mock driver should not require VolumeAttach for drivers without attachment","total":-1,"completed":4,"skipped":18,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  7 16:41:50.578: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pv
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 18 lines ...
Oct  7 16:41:59.557: INFO: PersistentVolumeClaim pvc-rdbdx found and phase=Bound (4.432401326s)
Oct  7 16:41:59.557: INFO: Waiting up to 3m0s for PersistentVolume nfs-kf5kq to have phase Bound
Oct  7 16:41:59.700: INFO: PersistentVolume nfs-kf5kq found and phase=Bound (143.632117ms)
STEP: Checking pod has write access to PersistentVolume
Oct  7 16:41:59.988: INFO: Creating nfs test pod
Oct  7 16:42:00.133: INFO: Pod should terminate with exitcode 0 (success)
Oct  7 16:42:00.133: INFO: Waiting up to 5m0s for pod "pvc-tester-2ngb7" in namespace "pv-2247" to be "Succeeded or Failed"
Oct  7 16:42:00.276: INFO: Pod "pvc-tester-2ngb7": Phase="Pending", Reason="", readiness=false. Elapsed: 143.273196ms
Oct  7 16:42:02.421: INFO: Pod "pvc-tester-2ngb7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.288311806s
Oct  7 16:42:04.565: INFO: Pod "pvc-tester-2ngb7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.43233967s
Oct  7 16:42:06.710: INFO: Pod "pvc-tester-2ngb7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.577477552s
STEP: Saw pod success
Oct  7 16:42:06.710: INFO: Pod "pvc-tester-2ngb7" satisfied condition "Succeeded or Failed"
Oct  7 16:42:06.710: INFO: Pod pvc-tester-2ngb7 succeeded 
Oct  7 16:42:06.710: INFO: Deleting pod "pvc-tester-2ngb7" in namespace "pv-2247"
Oct  7 16:42:06.858: INFO: Wait up to 5m0s for pod "pvc-tester-2ngb7" to be fully deleted
STEP: Deleting the PVC to invoke the reclaim policy.
Oct  7 16:42:07.002: INFO: Deleting PVC pvc-rdbdx to trigger reclamation of PV 
Oct  7 16:42:07.002: INFO: Deleting PersistentVolumeClaim "pvc-rdbdx"
... skipping 23 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:122
    with Single PV - PVC pairs
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:155
      create a PVC and non-pre-bound PV: test write access
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:178
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PVC and non-pre-bound PV: test write access","total":-1,"completed":5,"skipped":18,"failed":0}

SS
------------------------------
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 16 lines ...
Oct  7 16:41:53.394: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3093.svc.cluster.local from pod dns-3093/dns-test-8ca98130-72cc-4d85-912b-443adc914cc3: the server could not find the requested resource (get pods dns-test-8ca98130-72cc-4d85-912b-443adc914cc3)
Oct  7 16:41:53.538: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3093.svc.cluster.local from pod dns-3093/dns-test-8ca98130-72cc-4d85-912b-443adc914cc3: the server could not find the requested resource (get pods dns-test-8ca98130-72cc-4d85-912b-443adc914cc3)
Oct  7 16:41:53.972: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3093.svc.cluster.local from pod dns-3093/dns-test-8ca98130-72cc-4d85-912b-443adc914cc3: the server could not find the requested resource (get pods dns-test-8ca98130-72cc-4d85-912b-443adc914cc3)
Oct  7 16:41:54.117: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3093.svc.cluster.local from pod dns-3093/dns-test-8ca98130-72cc-4d85-912b-443adc914cc3: the server could not find the requested resource (get pods dns-test-8ca98130-72cc-4d85-912b-443adc914cc3)
Oct  7 16:41:54.260: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3093.svc.cluster.local from pod dns-3093/dns-test-8ca98130-72cc-4d85-912b-443adc914cc3: the server could not find the requested resource (get pods dns-test-8ca98130-72cc-4d85-912b-443adc914cc3)
Oct  7 16:41:54.404: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3093.svc.cluster.local from pod dns-3093/dns-test-8ca98130-72cc-4d85-912b-443adc914cc3: the server could not find the requested resource (get pods dns-test-8ca98130-72cc-4d85-912b-443adc914cc3)
Oct  7 16:41:54.692: INFO: Lookups using dns-3093/dns-test-8ca98130-72cc-4d85-912b-443adc914cc3 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3093.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3093.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3093.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3093.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3093.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3093.svc.cluster.local jessie_udp@dns-test-service-2.dns-3093.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3093.svc.cluster.local]

Oct  7 16:41:59.836: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3093.svc.cluster.local from pod dns-3093/dns-test-8ca98130-72cc-4d85-912b-443adc914cc3: the server could not find the requested resource (get pods dns-test-8ca98130-72cc-4d85-912b-443adc914cc3)
Oct  7 16:41:59.980: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3093.svc.cluster.local from pod dns-3093/dns-test-8ca98130-72cc-4d85-912b-443adc914cc3: the server could not find the requested resource (get pods dns-test-8ca98130-72cc-4d85-912b-443adc914cc3)
Oct  7 16:42:00.125: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3093.svc.cluster.local from pod dns-3093/dns-test-8ca98130-72cc-4d85-912b-443adc914cc3: the server could not find the requested resource (get pods dns-test-8ca98130-72cc-4d85-912b-443adc914cc3)
Oct  7 16:42:00.269: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3093.svc.cluster.local from pod dns-3093/dns-test-8ca98130-72cc-4d85-912b-443adc914cc3: the server could not find the requested resource (get pods dns-test-8ca98130-72cc-4d85-912b-443adc914cc3)
Oct  7 16:42:00.705: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3093.svc.cluster.local from pod dns-3093/dns-test-8ca98130-72cc-4d85-912b-443adc914cc3: the server could not find the requested resource (get pods dns-test-8ca98130-72cc-4d85-912b-443adc914cc3)
Oct  7 16:42:00.848: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3093.svc.cluster.local from pod dns-3093/dns-test-8ca98130-72cc-4d85-912b-443adc914cc3: the server could not find the requested resource (get pods dns-test-8ca98130-72cc-4d85-912b-443adc914cc3)
Oct  7 16:42:00.992: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3093.svc.cluster.local from pod dns-3093/dns-test-8ca98130-72cc-4d85-912b-443adc914cc3: the server could not find the requested resource (get pods dns-test-8ca98130-72cc-4d85-912b-443adc914cc3)
Oct  7 16:42:01.136: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3093.svc.cluster.local from pod dns-3093/dns-test-8ca98130-72cc-4d85-912b-443adc914cc3: the server could not find the requested resource (get pods dns-test-8ca98130-72cc-4d85-912b-443adc914cc3)
Oct  7 16:42:01.424: INFO: Lookups using dns-3093/dns-test-8ca98130-72cc-4d85-912b-443adc914cc3 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3093.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3093.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3093.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3093.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3093.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3093.svc.cluster.local jessie_udp@dns-test-service-2.dns-3093.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3093.svc.cluster.local]

Oct  7 16:42:04.837: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3093.svc.cluster.local from pod dns-3093/dns-test-8ca98130-72cc-4d85-912b-443adc914cc3: the server could not find the requested resource (get pods dns-test-8ca98130-72cc-4d85-912b-443adc914cc3)
Oct  7 16:42:04.981: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3093.svc.cluster.local from pod dns-3093/dns-test-8ca98130-72cc-4d85-912b-443adc914cc3: the server could not find the requested resource (get pods dns-test-8ca98130-72cc-4d85-912b-443adc914cc3)
Oct  7 16:42:05.125: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3093.svc.cluster.local from pod dns-3093/dns-test-8ca98130-72cc-4d85-912b-443adc914cc3: the server could not find the requested resource (get pods dns-test-8ca98130-72cc-4d85-912b-443adc914cc3)
Oct  7 16:42:05.269: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3093.svc.cluster.local from pod dns-3093/dns-test-8ca98130-72cc-4d85-912b-443adc914cc3: the server could not find the requested resource (get pods dns-test-8ca98130-72cc-4d85-912b-443adc914cc3)
Oct  7 16:42:05.700: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3093.svc.cluster.local from pod dns-3093/dns-test-8ca98130-72cc-4d85-912b-443adc914cc3: the server could not find the requested resource (get pods dns-test-8ca98130-72cc-4d85-912b-443adc914cc3)
Oct  7 16:42:05.845: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3093.svc.cluster.local from pod dns-3093/dns-test-8ca98130-72cc-4d85-912b-443adc914cc3: the server could not find the requested resource (get pods dns-test-8ca98130-72cc-4d85-912b-443adc914cc3)
Oct  7 16:42:05.989: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3093.svc.cluster.local from pod dns-3093/dns-test-8ca98130-72cc-4d85-912b-443adc914cc3: the server could not find the requested resource (get pods dns-test-8ca98130-72cc-4d85-912b-443adc914cc3)
Oct  7 16:42:06.133: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3093.svc.cluster.local from pod dns-3093/dns-test-8ca98130-72cc-4d85-912b-443adc914cc3: the server could not find the requested resource (get pods dns-test-8ca98130-72cc-4d85-912b-443adc914cc3)
Oct  7 16:42:06.420: INFO: Lookups using dns-3093/dns-test-8ca98130-72cc-4d85-912b-443adc914cc3 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3093.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3093.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3093.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3093.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3093.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3093.svc.cluster.local jessie_udp@dns-test-service-2.dns-3093.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3093.svc.cluster.local]

Oct  7 16:42:09.837: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3093.svc.cluster.local from pod dns-3093/dns-test-8ca98130-72cc-4d85-912b-443adc914cc3: the server could not find the requested resource (get pods dns-test-8ca98130-72cc-4d85-912b-443adc914cc3)
Oct  7 16:42:09.981: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3093.svc.cluster.local from pod dns-3093/dns-test-8ca98130-72cc-4d85-912b-443adc914cc3: the server could not find the requested resource (get pods dns-test-8ca98130-72cc-4d85-912b-443adc914cc3)
Oct  7 16:42:10.125: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3093.svc.cluster.local from pod dns-3093/dns-test-8ca98130-72cc-4d85-912b-443adc914cc3: the server could not find the requested resource (get pods dns-test-8ca98130-72cc-4d85-912b-443adc914cc3)
Oct  7 16:42:10.270: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3093.svc.cluster.local from pod dns-3093/dns-test-8ca98130-72cc-4d85-912b-443adc914cc3: the server could not find the requested resource (get pods dns-test-8ca98130-72cc-4d85-912b-443adc914cc3)
Oct  7 16:42:10.703: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3093.svc.cluster.local from pod dns-3093/dns-test-8ca98130-72cc-4d85-912b-443adc914cc3: the server could not find the requested resource (get pods dns-test-8ca98130-72cc-4d85-912b-443adc914cc3)
Oct  7 16:42:10.847: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3093.svc.cluster.local from pod dns-3093/dns-test-8ca98130-72cc-4d85-912b-443adc914cc3: the server could not find the requested resource (get pods dns-test-8ca98130-72cc-4d85-912b-443adc914cc3)
Oct  7 16:42:10.991: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3093.svc.cluster.local from pod dns-3093/dns-test-8ca98130-72cc-4d85-912b-443adc914cc3: the server could not find the requested resource (get pods dns-test-8ca98130-72cc-4d85-912b-443adc914cc3)
Oct  7 16:42:11.135: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3093.svc.cluster.local from pod dns-3093/dns-test-8ca98130-72cc-4d85-912b-443adc914cc3: the server could not find the requested resource (get pods dns-test-8ca98130-72cc-4d85-912b-443adc914cc3)
Oct  7 16:42:11.423: INFO: Lookups using dns-3093/dns-test-8ca98130-72cc-4d85-912b-443adc914cc3 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3093.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3093.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3093.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3093.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3093.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3093.svc.cluster.local jessie_udp@dns-test-service-2.dns-3093.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3093.svc.cluster.local]

Oct  7 16:42:16.420: INFO: DNS probes using dns-3093/dns-test-8ca98130-72cc-4d85-912b-443adc914cc3 succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
... skipping 120 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  7 16:42:16.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-8708" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource ","total":-1,"completed":26,"skipped":197,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:42:17.149: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 70 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  7 16:42:16.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ingress-3189" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":-1,"completed":13,"skipped":111,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:42:17.258: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 108 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:449
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":19,"skipped":128,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:42:18.526: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 55 lines ...
      Driver csi-hostpath doesn't support ext4 -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:121
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume","total":-1,"completed":9,"skipped":73,"failed":1,"failures":["[sig-network] Services should be rejected when no endpoints exist"]}
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  7 16:41:46.501: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 55 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:376
    should contain last line of the log
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:605
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should contain last line of the log","total":-1,"completed":10,"skipped":73,"failed":1,"failures":["[sig-network] Services should be rejected when no endpoints exist"]}

S
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 16 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  7 16:42:19.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-3077" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Networking should provide unchanging, static URL paths for kubernetes api services","total":-1,"completed":14,"skipped":124,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Oct  7 16:42:18.111: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d7a1aef2-a2d2-4f64-9cdf-04388bb3c5d8" in namespace "downward-api-5222" to be "Succeeded or Failed"
Oct  7 16:42:18.256: INFO: Pod "downwardapi-volume-d7a1aef2-a2d2-4f64-9cdf-04388bb3c5d8": Phase="Pending", Reason="", readiness=false. Elapsed: 144.438258ms
Oct  7 16:42:20.401: INFO: Pod "downwardapi-volume-d7a1aef2-a2d2-4f64-9cdf-04388bb3c5d8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.289487533s
STEP: Saw pod success
Oct  7 16:42:20.401: INFO: Pod "downwardapi-volume-d7a1aef2-a2d2-4f64-9cdf-04388bb3c5d8" satisfied condition "Succeeded or Failed"
Oct  7 16:42:20.544: INFO: Trying to get logs from node ip-172-20-43-90.sa-east-1.compute.internal pod downwardapi-volume-d7a1aef2-a2d2-4f64-9cdf-04388bb3c5d8 container client-container: <nil>
STEP: delete the pod
Oct  7 16:42:20.837: INFO: Waiting for pod downwardapi-volume-d7a1aef2-a2d2-4f64-9cdf-04388bb3c5d8 to disappear
Oct  7 16:42:20.980: INFO: Pod downwardapi-volume-d7a1aef2-a2d2-4f64-9cdf-04388bb3c5d8 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  7 16:42:20.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5222" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":27,"skipped":214,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:42:21.301: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
... skipping 81 lines ...
Oct  7 16:42:11.566: INFO: PersistentVolumeClaim pvc-bfdsr found but phase is Pending instead of Bound.
Oct  7 16:42:13.710: INFO: PersistentVolumeClaim pvc-bfdsr found and phase=Bound (10.860275345s)
Oct  7 16:42:13.710: INFO: Waiting up to 3m0s for PersistentVolume local-qfbd6 to have phase Bound
Oct  7 16:42:13.854: INFO: PersistentVolume local-qfbd6 found and phase=Bound (143.757347ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-tlgb
STEP: Creating a pod to test subpath
Oct  7 16:42:14.289: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-tlgb" in namespace "provisioning-8715" to be "Succeeded or Failed"
Oct  7 16:42:14.432: INFO: Pod "pod-subpath-test-preprovisionedpv-tlgb": Phase="Pending", Reason="", readiness=false. Elapsed: 142.884798ms
Oct  7 16:42:16.578: INFO: Pod "pod-subpath-test-preprovisionedpv-tlgb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.288860514s
Oct  7 16:42:18.721: INFO: Pod "pod-subpath-test-preprovisionedpv-tlgb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.432270264s
STEP: Saw pod success
Oct  7 16:42:18.721: INFO: Pod "pod-subpath-test-preprovisionedpv-tlgb" satisfied condition "Succeeded or Failed"
Oct  7 16:42:18.865: INFO: Trying to get logs from node ip-172-20-42-249.sa-east-1.compute.internal pod pod-subpath-test-preprovisionedpv-tlgb container test-container-subpath-preprovisionedpv-tlgb: <nil>
STEP: delete the pod
Oct  7 16:42:19.159: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-tlgb to disappear
Oct  7 16:42:19.303: INFO: Pod pod-subpath-test-preprovisionedpv-tlgb no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-tlgb
Oct  7 16:42:19.303: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-tlgb" in namespace "provisioning-8715"
... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":21,"skipped":110,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:42:24.311: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 9 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:208

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","total":-1,"completed":15,"skipped":125,"failed":0}
[BeforeEach] [sig-instrumentation] MetricsGrabber
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  7 16:42:23.738: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename metrics-grabber
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 18 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-c5e53963-f7e6-46a8-8f5d-5e630cf522bc
STEP: Creating a pod to test consume secrets
Oct  7 16:42:19.720: INFO: Waiting up to 5m0s for pod "pod-secrets-96e816bf-3712-4245-b387-4e3d4f3ab4c7" in namespace "secrets-2923" to be "Succeeded or Failed"
Oct  7 16:42:19.863: INFO: Pod "pod-secrets-96e816bf-3712-4245-b387-4e3d4f3ab4c7": Phase="Pending", Reason="", readiness=false. Elapsed: 143.398817ms
Oct  7 16:42:22.007: INFO: Pod "pod-secrets-96e816bf-3712-4245-b387-4e3d4f3ab4c7": Phase="Running", Reason="", readiness=true. Elapsed: 2.287612919s
Oct  7 16:42:24.153: INFO: Pod "pod-secrets-96e816bf-3712-4245-b387-4e3d4f3ab4c7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.432833167s
STEP: Saw pod success
Oct  7 16:42:24.153: INFO: Pod "pod-secrets-96e816bf-3712-4245-b387-4e3d4f3ab4c7" satisfied condition "Succeeded or Failed"
Oct  7 16:42:24.312: INFO: Trying to get logs from node ip-172-20-42-249.sa-east-1.compute.internal pod pod-secrets-96e816bf-3712-4245-b387-4e3d4f3ab4c7 container secret-volume-test: <nil>
STEP: delete the pod
Oct  7 16:42:24.617: INFO: Waiting for pod pod-secrets-96e816bf-3712-4245-b387-4e3d4f3ab4c7 to disappear
Oct  7 16:42:24.761: INFO: Pod pod-secrets-96e816bf-3712-4245-b387-4e3d4f3ab4c7 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.487 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from a ControllerManager.","total":-1,"completed":16,"skipped":125,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":74,"failed":1,"failures":["[sig-network] Services should be rejected when no endpoints exist"]}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:42:25.088: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 69 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Oct  7 16:42:22.187: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a4b750c9-8c59-447f-94d5-ba9669eac0c7" in namespace "projected-3513" to be "Succeeded or Failed"
Oct  7 16:42:22.331: INFO: Pod "downwardapi-volume-a4b750c9-8c59-447f-94d5-ba9669eac0c7": Phase="Pending", Reason="", readiness=false. Elapsed: 143.895957ms
Oct  7 16:42:24.479: INFO: Pod "downwardapi-volume-a4b750c9-8c59-447f-94d5-ba9669eac0c7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.291712364s
STEP: Saw pod success
Oct  7 16:42:24.479: INFO: Pod "downwardapi-volume-a4b750c9-8c59-447f-94d5-ba9669eac0c7" satisfied condition "Succeeded or Failed"
Oct  7 16:42:24.625: INFO: Trying to get logs from node ip-172-20-43-90.sa-east-1.compute.internal pod downwardapi-volume-a4b750c9-8c59-447f-94d5-ba9669eac0c7 container client-container: <nil>
STEP: delete the pod
Oct  7 16:42:24.937: INFO: Waiting for pod downwardapi-volume-a4b750c9-8c59-447f-94d5-ba9669eac0c7 to disappear
Oct  7 16:42:25.083: INFO: Pod downwardapi-volume-a4b750c9-8c59-447f-94d5-ba9669eac0c7 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  7 16:42:25.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3513" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":28,"skipped":220,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:42:25.390: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 43 lines ...
Oct  7 16:42:18.585: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support existing directories when readOnly specified in the volumeSource
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:399
Oct  7 16:42:19.385: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Oct  7 16:42:19.713: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-2284" in namespace "provisioning-2284" to be "Succeeded or Failed"
Oct  7 16:42:19.856: INFO: Pod "hostpath-symlink-prep-provisioning-2284": Phase="Pending", Reason="", readiness=false. Elapsed: 143.169107ms
Oct  7 16:42:22.000: INFO: Pod "hostpath-symlink-prep-provisioning-2284": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.287073069s
STEP: Saw pod success
Oct  7 16:42:22.000: INFO: Pod "hostpath-symlink-prep-provisioning-2284" satisfied condition "Succeeded or Failed"
Oct  7 16:42:22.000: INFO: Deleting pod "hostpath-symlink-prep-provisioning-2284" in namespace "provisioning-2284"
Oct  7 16:42:22.156: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-2284" to be fully deleted
Oct  7 16:42:22.300: INFO: Creating resource for inline volume
Oct  7 16:42:22.300: INFO: Driver hostPathSymlink on volume type InlineVolume doesn't support readOnly source
STEP: Deleting pod
Oct  7 16:42:22.300: INFO: Deleting pod "pod-subpath-test-inlinevolume-jrnn" in namespace "provisioning-2284"
Oct  7 16:42:22.587: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-2284" in namespace "provisioning-2284" to be "Succeeded or Failed"
Oct  7 16:42:22.731: INFO: Pod "hostpath-symlink-prep-provisioning-2284": Phase="Pending", Reason="", readiness=false. Elapsed: 143.202847ms
Oct  7 16:42:24.890: INFO: Pod "hostpath-symlink-prep-provisioning-2284": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.302422107s
STEP: Saw pod success
Oct  7 16:42:24.890: INFO: Pod "hostpath-symlink-prep-provisioning-2284" satisfied condition "Succeeded or Failed"
Oct  7 16:42:24.890: INFO: Deleting pod "hostpath-symlink-prep-provisioning-2284" in namespace "provisioning-2284"
Oct  7 16:42:25.045: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-2284" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  7 16:42:25.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-2284" for this suite.
... skipping 89 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  7 16:42:28.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7123" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Pods should get a host IP [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":133,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:42:28.881: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 55 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:208

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":-1,"completed":9,"skipped":43,"failed":0}
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  7 16:42:17.617: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 32 lines ...
• [SLOW TEST:11.576 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":-1,"completed":10,"skipped":43,"failed":0}

SS
------------------------------
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 16 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  7 16:42:30.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-8271" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should update/patch PodDisruptionBudget status [Conformance]","total":-1,"completed":29,"skipped":224,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:42:30.336: INFO: Driver hostPath doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 41 lines ...
Oct  7 16:42:29.218: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir volume type on tmpfs
Oct  7 16:42:30.094: INFO: Waiting up to 5m0s for pod "pod-d939d8b9-6e12-4cff-9355-15778d24e0f8" in namespace "emptydir-5863" to be "Succeeded or Failed"
Oct  7 16:42:30.238: INFO: Pod "pod-d939d8b9-6e12-4cff-9355-15778d24e0f8": Phase="Pending", Reason="", readiness=false. Elapsed: 144.624397ms
Oct  7 16:42:32.383: INFO: Pod "pod-d939d8b9-6e12-4cff-9355-15778d24e0f8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.289345859s
STEP: Saw pod success
Oct  7 16:42:32.383: INFO: Pod "pod-d939d8b9-6e12-4cff-9355-15778d24e0f8" satisfied condition "Succeeded or Failed"
Oct  7 16:42:32.544: INFO: Trying to get logs from node ip-172-20-56-61.sa-east-1.compute.internal pod pod-d939d8b9-6e12-4cff-9355-15778d24e0f8 container test-container: <nil>
STEP: delete the pod
Oct  7 16:42:32.842: INFO: Waiting for pod pod-d939d8b9-6e12-4cff-9355-15778d24e0f8 to disappear
Oct  7 16:42:32.985: INFO: Pod pod-d939d8b9-6e12-4cff-9355-15778d24e0f8 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 10 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:106
STEP: Creating a pod to test downward API volume plugin
Oct  7 16:42:31.241: INFO: Waiting up to 5m0s for pod "metadata-volume-c947d116-006e-46c6-8350-84ad6ff83bfc" in namespace "projected-5185" to be "Succeeded or Failed"
Oct  7 16:42:31.385: INFO: Pod "metadata-volume-c947d116-006e-46c6-8350-84ad6ff83bfc": Phase="Pending", Reason="", readiness=false. Elapsed: 143.712577ms
Oct  7 16:42:33.528: INFO: Pod "metadata-volume-c947d116-006e-46c6-8350-84ad6ff83bfc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.28726888s
STEP: Saw pod success
Oct  7 16:42:33.528: INFO: Pod "metadata-volume-c947d116-006e-46c6-8350-84ad6ff83bfc" satisfied condition "Succeeded or Failed"
Oct  7 16:42:33.672: INFO: Trying to get logs from node ip-172-20-42-249.sa-east-1.compute.internal pod metadata-volume-c947d116-006e-46c6-8350-84ad6ff83bfc container client-container: <nil>
STEP: delete the pod
Oct  7 16:42:33.967: INFO: Waiting for pod metadata-volume-c947d116-006e-46c6-8350-84ad6ff83bfc to disappear
Oct  7 16:42:34.111: INFO: Pod metadata-volume-c947d116-006e-46c6-8350-84ad6ff83bfc no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  7 16:42:34.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5185" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":30,"skipped":227,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:42:34.415: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 34 lines ...
      Driver aws doesn't support ext3 -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:121
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":45,"failed":0}
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  7 16:42:33.288: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 7 lines ...
STEP: Destroying namespace "services-7374" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

•SS
------------------------------
{"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":-1,"completed":12,"skipped":45,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:42:34.457: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 80 lines ...
• [SLOW TEST:11.062 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny custom resource creation, update and deletion [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":-1,"completed":12,"skipped":75,"failed":1,"failures":["[sig-network] Services should be rejected when no endpoints exist"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:42:36.175: INFO: Only supported for providers [openstack] (not aws)
... skipping 51 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: blockfs]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 32 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W1007 16:37:41.086481    5348 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Oct  7 16:42:41.386: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering.
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  7 16:42:41.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2770" for this suite.


• [SLOW TEST:312.172 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":-1,"completed":5,"skipped":48,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
... skipping 87 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (block volmode)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volumes should store data","total":-1,"completed":17,"skipped":92,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  7 16:42:42.798: INFO: Driver local doesn't support ext3 -- skipping
... skipping 43402 lines ...
6:47:27.720883       1 service.go:446] Removing service port "volume-3394-573/csi-hostpathplugin:dummy"
I1007 16:47:27.721008       1 proxier.go:857] "Syncing iptables rules"
I1007 16:47:27.766483       1 proxier.go:824] "syncProxyRules complete" elapsed="45.588161ms"
I1007 16:47:27.766724       1 proxier.go:857] "Syncing iptables rules"
I1007 16:47:27.809959       1 proxier.go:824] "syncProxyRules complete" elapsed="43.432194ms"
I1007 16:47:48.850761       1 service.go:306] Service kubectl-3365/agnhost-primary updated: 1 ports
I1007 16:47:48.850802       1 service.go:421] Adding new service port "kubectl-3365/agnhost-primary" at 100.64.231.181:6379/TCP
I1007 16:47:48.850921       1 proxier.go:857] "Syncing iptables rules"
I1007 16:47:48.888034       1 proxier.go:824] "syncProxyRules complete" elapsed="37.196394ms"
I1007 16:47:48.888322       1 proxier.go:857] "Syncing iptables rules"
I1007 16:47:48.923969       1 proxier.go:824] "syncProxyRules complete" elapsed="35.763593ms"
I1007 16:47:56.176746       1 service.go:306] Service kubectl-3365/agnhost-primary updated: 0 ports
I1007 16:47:56.176785       1 service.go:446] Removing service port "kubectl-3365/agnhost-primary"
I1007 16:47:56.176879       1 proxier.go:857] "Syncing iptables rules"
I1007 16:47:56.218102       1 proxier.go:824] "syncProxyRules complete" elapsed="41.30603ms"
I1007 16:47:56.218459       1 proxier.go:857] "Syncing iptables rules"
I1007 16:47:56.280452       1 proxier.go:824] "syncProxyRules complete" elapsed="62.254281ms"
I1007 16:48:10.853060       1 proxier.go:857] "Syncing iptables rules"
I1007 16:48:10.887162       1 proxier.go:824] "syncProxyRules complete" elapsed="34.223852ms"
I1007 16:48:10.887395       1 proxier.go:857] "Syncing iptables rules"
I1007 16:48:10.920044       1 proxier.go:824] "syncProxyRules complete" elapsed="32.844988ms"
I1007 16:48:16.088874       1 proxier.go:857] "Syncing iptables rules"
I1007 16:48:16.167989       1 proxier.go:824] "syncProxyRules complete" elapsed="79.114158ms"
I1007 16:48:16.226753       1 service.go:306] Service dns-2292/test-service-2 updated: 0 ports
I1007 16:48:16.226792       1 service.go:446] Removing service port "dns-2292/test-service-2:http"
I1007 16:48:16.226909       1 proxier.go:857] "Syncing iptables rules"
I1007 16:48:16.328122       1 proxier.go:824] "syncProxyRules complete" elapsed="101.319935ms"
I1007 16:48:17.331062       1 proxier.go:857] "Syncing iptables rules"
I1007 16:48:17.369206       1 proxier.go:824] "syncProxyRules complete" elapsed="38.337158ms"
I1007 16:48:20.215188       1 service.go:306] Service services-1840/hairpin-test updated: 1 ports
I1007 16:48:20.215238       1 service.go:421] Adding new service port "services-1840/hairpin-test" at 100.70.254.218:8080/TCP
I1007 16:48:20.215329       1 proxier.go:857] "Syncing iptables rules"
I1007 16:48:20.283971       1 proxier.go:824] "syncProxyRules complete" elapsed="68.72796ms"
I1007 16:48:20.284197       1 proxier.go:857] "Syncing iptables rules"
I1007 16:48:20.320941       1 proxier.go:824] "syncProxyRules complete" elapsed="36.929327ms"
I1007 16:48:21.547080       1 proxier.go:857] "Syncing iptables rules"
I1007 16:48:21.581576       1 proxier.go:824] "syncProxyRules complete" elapsed="34.5807ms"
I1007 16:48:22.968084       1 service.go:306] Service conntrack-3329/boom-server updated: 0 ports
I1007 16:48:22.968124       1 service.go:446] Removing service port "conntrack-3329/boom-server"
I1007 16:48:22.968223       1 proxier.go:857] "Syncing iptables rules"
I1007 16:48:23.006415       1 proxier.go:824] "syncProxyRules complete" elapsed="38.27896ms"
I1007 16:48:24.006659       1 proxier.go:857] "Syncing iptables rules"
I1007 16:48:24.038570       1 proxier.go:824] "syncProxyRules complete" elapsed="32.015016ms"
I1007 16:48:26.281468       1 service.go:306] Service services-9857/affinity-nodeport-timeout updated: 0 ports
I1007 16:48:26.281507       1 service.go:446] Removing service port "services-9857/affinity-nodeport-timeout"
I1007 16:48:26.281624       1 proxier.go:857] "Syncing iptables rules"
I1007 16:48:26.327892       1 proxier.go:824] "syncProxyRules complete" elapsed="46.374544ms"
I1007 16:48:26.328009       1 proxier.go:857] "Syncing iptables rules"
I1007 16:48:26.365618       1 proxier.go:824] "syncProxyRules complete" elapsed="37.678593ms"
I1007 16:48:32.636056       1 proxier.go:857] "Syncing iptables rules"
I1007 16:48:32.644927       1 service.go:306] Service services-1840/hairpin-test updated: 0 ports
I1007 16:48:32.716781       1 proxier.go:824] "syncProxyRules complete" elapsed="80.82937ms"
I1007 16:48:32.716862       1 service.go:446] Removing service port "services-1840/hairpin-test"
I1007 16:48:32.716997       1 proxier.go:857] "Syncing iptables rules"
I1007 16:48:32.771447       1 proxier.go:824] "syncProxyRules complete" elapsed="54.576802ms"
I1007 16:48:36.323342       1 service.go:306] Service services-217/externalip-test updated: 1 ports
I1007 16:48:36.323382       1 service.go:421] Adding new service port "services-217/externalip-test:http" at 100.65.22.59:80/TCP
I1007 16:48:36.323497       1 proxier.go:857] "Syncing iptables rules"
I1007 16:48:36.361083       1 proxier.go:824] "syncProxyRules complete" elapsed="37.69552ms"
I1007 16:48:36.361318       1 proxier.go:857] "Syncing iptables rules"
I1007 16:48:36.410861       1 proxier.go:824] "syncProxyRules complete" elapsed="49.735401ms"
I1007 16:48:38.438197       1 proxier.go:857] "Syncing iptables rules"
I1007 16:48:38.492539       1 proxier.go:824] "syncProxyRules complete" elapsed="54.44157ms"
I1007 16:48:39.148786       1 proxier.go:857] "Syncing iptables rules"
I1007 16:48:39.189090       1 proxier.go:824] "syncProxyRules complete" elapsed="40.405532ms"
I1007 16:48:42.394186       1 proxier.go:857] "Syncing iptables rules"
I1007 16:48:42.432229       1 proxier.go:824] "syncProxyRules complete" elapsed="38.101499ms"
I1007 16:48:42.536905       1 service.go:306] Service dns-262/test-service-2 updated: 0 ports
I1007 16:48:42.536946       1 service.go:446] Removing service port "dns-262/test-service-2:http"
I1007 16:48:42.537066       1 proxier.go:857] "Syncing iptables rules"
I1007 16:48:42.582787       1 proxier.go:824] "syncProxyRules complete" elapsed="45.825467ms"
I1007 16:48:43.584051       1 proxier.go:857] "Syncing iptables rules"
I1007 16:48:43.642863       1 proxier.go:824] "syncProxyRules complete" elapsed="58.927837ms"
I1007 16:48:45.684031       1 proxier.go:857] "Syncing iptables rules"
I1007 16:48:45.717177       1 proxier.go:824] "syncProxyRules complete" elapsed="33.21958ms"
I1007 16:48:52.465624       1 proxier.go:857] "Syncing iptables rules"
I1007 16:48:52.511577       1 proxier.go:824] "syncProxyRules complete" elapsed="46.029432ms"
I1007 16:49:23.010047       1 service.go:306] Service volumemode-4802-7028/csi-hostpathplugin updated: 1 ports
I1007 16:49:23.010091       1 service.go:421] Adding new service port "volumemode-4802-7028/csi-hostpathplugin:dummy" at 100.68.177.90:12345/TCP
I1007 16:49:23.010212       1 proxier.go:857] "Syncing iptables rules"
I1007 16:49:23.042649       1 proxier.go:824] "syncProxyRules complete" elapsed="32.552724ms"
I1007 16:49:23.042879       1 proxier.go:857] "Syncing iptables rules"
I1007 16:49:23.125507       1 proxier.go:824] "syncProxyRules complete" elapsed="82.81217ms"
I1007 16:49:28.251164       1 service.go:306] Service webhook-6514/e2e-test-webhook updated: 1 ports
I1007 16:49:28.251323       1 service.go:421] Adding new service port "webhook-6514/e2e-test-webhook" at 100.65.225.68:8443/TCP
I1007 16:49:28.251578       1 proxier.go:857] "Syncing iptables rules"
I1007 16:49:28.309606       1 proxier.go:824] "syncProxyRules complete" elapsed="58.277113ms"
I1007 16:49:28.309945       1 proxier.go:857] "Syncing iptables rules"
I1007 16:49:28.364866       1 proxier.go:824] "syncProxyRules complete" elapsed="55.218347ms"
I1007 16:49:29.801448       1 proxier.go:857] "Syncing iptables rules"
I1007 16:49:29.840793       1 proxier.go:824] "syncProxyRules complete" elapsed="39.423253ms"
I1007 16:49:31.977746       1 proxier.go:857] "Syncing iptables rules"
I1007 16:49:32.055524       1 proxier.go:824] "syncProxyRules complete" elapsed="77.901687ms"
I1007 16:49:32.096646       1 service.go:306] Service services-217/externalip-test updated: 0 ports
I1007 16:49:32.096686       1 service.go:446] Removing service port "services-217/externalip-test:http"
I1007 16:49:32.096817       1 proxier.go:857] "Syncing iptables rules"
I1007 16:49:32.146429       1 proxier.go:824] "syncProxyRules complete" elapsed="49.728755ms"
I1007 16:49:33.287098       1 service.go:306] Service ephemeral-2117-5082/csi-hostpathplugin updated: 1 ports
I1007 16:49:33.287153       1 service.go:421] Adding new service port "ephemeral-2117-5082/csi-hostpathplugin:dummy" at 100.65.189.165:12345/TCP
I1007 16:49:33.287271       1 proxier.go:857] "Syncing iptables rules"
I1007 16:49:33.320536       1 proxier.go:824] "syncProxyRules complete" elapsed="33.379724ms"
I1007 16:49:33.908370       1 service.go:306] Service webhook-6514/e2e-test-webhook updated: 0 ports
I1007 16:49:34.320695       1 service.go:446] Removing service port "webhook-6514/e2e-test-webhook"
I1007 16:49:34.320869       1 proxier.go:857] "Syncing iptables rules"
I1007 16:49:34.362999       1 proxier.go:824] "syncProxyRules complete" elapsed="42.309571ms"
I1007 16:49:41.665597       1 service.go:306] Service endpointslice-3787/example-empty-selector updated: 1 ports
I1007 16:49:41.665639       1 service.go:421] Adding new service port "endpointslice-3787/example-empty-selector:example" at 100.71.42.85:80/TCP
I1007 16:49:41.665763       1 proxier.go:857] "Syncing iptables rules"
I1007 16:49:41.704063       1 proxier.go:824] "syncProxyRules complete" elapsed="38.419852ms"
I1007 16:49:41.704287       1 proxier.go:857] "Syncing iptables rules"
I1007 16:49:41.750152       1 proxier.go:824] "syncProxyRules complete" elapsed="46.036173ms"
I1007 16:49:42.116199       1 service.go:306] Service endpointslice-3787/example-empty-selector updated: 0 ports
I1007 16:49:42.750222       1 service.go:446] Removing service port "endpointslice-3787/example-empty-selector:example"
I1007 16:49:42.750366       1 proxier.go:857] "Syncing iptables rules"
I1007 16:49:42.783620       1 proxier.go:824] "syncProxyRules complete" elapsed="33.403021ms"
I1007 16:49:46.252327       1 service.go:306] Service provisioning-5424-2566/csi-hostpathplugin updated: 1 ports
I1007 16:49:46.252476       1 service.go:421] Adding new service port "provisioning-5424-2566/csi-hostpathplugin:dummy" at 100.66.223.44:12345/TCP
I1007 16:49:46.252598       1 proxier.go:857] "Syncing iptables rules"
I1007 16:49:46.346205       1 proxier.go:824] "syncProxyRules complete" elapsed="93.720062ms"
I1007 16:49:46.346376       1 proxier.go:857] "Syncing iptables rules"
I1007 16:49:46.416412       1 proxier.go:824] "syncProxyRules complete" elapsed="70.160681ms"
I1007 16:49:58.046253       1 proxier.go:857] "Syncing iptables rules"
I1007 16:49:58.101759       1 proxier.go:824] "syncProxyRules complete" elapsed="55.62896ms"
I1007 16:50:00.447631       1 proxier.go:857] "Syncing iptables rules"
I1007 16:50:00.530679       1 proxier.go:824] "syncProxyRules complete" elapsed="83.135659ms"
I1007 16:50:06.428774       1 service.go:306] Service volumemode-4802-7028/csi-hostpathplugin updated: 0 ports
I1007 16:50:06.428814       1 service.go:446] Removing service port "volumemode-4802-7028/csi-hostpathplugin:dummy"
I1007 16:50:06.428944       1 proxier.go:857] "Syncing iptables rules"
I1007 16:50:06.467701       1 proxier.go:824] "syncProxyRules complete" elapsed="38.877249ms"
I1007 16:50:06.467870       1 proxier.go:857] "Syncing iptables rules"
I1007 16:50:06.510156       1 proxier.go:824] "syncProxyRules complete" elapsed="42.414209ms"
I1007 16:50:07.127729       1 service.go:306] Service conntrack-4290/svc-udp updated: 1 ports
I1007 16:50:07.511099       1 service.go:421] Adding new service port "conntrack-4290/svc-udp:udp" at 100.66.98.61:80/UDP
I1007 16:50:07.511246       1 proxier.go:857] "Syncing iptables rules"
I1007 16:50:07.538631       1 proxier.go:1292] "Opened local port" port="\"nodePort for conntrack-4290/svc-udp:udp\" (:31732/udp4)"
I1007 16:50:07.547256       1 proxier.go:824] "syncProxyRules complete" elapsed="36.171746ms"
I1007 16:50:18.691262       1 proxier.go:841] "Stale service" protocol="udp" svcPortName="conntrack-4290/svc-udp:udp" clusterIP="100.66.98.61"
I1007 16:50:18.691338       1 proxier.go:851] Stale udp service NodePort conntrack-4290/svc-udp:udp -> 31732
I1007 16:50:18.691366       1 proxier.go:857] "Syncing iptables rules"
I1007 16:50:18.770225       1 proxier.go:824] "syncProxyRules complete" elapsed="79.07472ms"
I1007 16:50:29.631256       1 service.go:306] Service provisioning-5424-2566/csi-hostpathplugin updated: 0 ports
I1007 16:50:29.631298       1 service.go:446] Removing service port "provisioning-5424-2566/csi-hostpathplugin:dummy"
I1007 16:50:29.631433       1 proxier.go:857] "Syncing iptables rules"
I1007 16:50:29.665408       1 proxier.go:824] "syncProxyRules complete" elapsed="34.100319ms"
I1007 16:50:29.665630       1 proxier.go:857] "Syncing iptables rules"
I1007 16:50:29.698743       1 proxier.go:824] "syncProxyRules complete" elapsed="33.293876ms"
I1007 16:50:32.398663       1 proxier.go:857] "Syncing iptables rules"
I1007 16:50:32.448148       1 proxier.go:824] "syncProxyRules complete" elapsed="49.571423ms"
I1007 16:50:34.059300       1 proxier.go:857] "Syncing iptables rules"
I1007 16:50:34.102026       1 proxier.go:824] "syncProxyRules complete" elapsed="42.857703ms"
I1007 16:50:36.565294       1 service.go:306] Service services-3374/affinity-nodeport updated: 1 ports
I1007 16:50:36.565339       1 service.go:421] Adding new service port "services-3374/affinity-nodeport" at 100.71.3.67:80/TCP
I1007 16:50:36.565459       1 proxier.go:857] "Syncing iptables rules"
I1007 16:50:36.605950       1 proxier.go:1292] "Opened local port" port="\"nodePort for services-3374/affinity-nodeport\" (:30817/tcp4)"
I1007 16:50:36.612338       1 proxier.go:824] "syncProxyRules complete" elapsed="46.994309ms"
I1007 16:50:36.612600       1 proxier.go:857] "Syncing iptables rules"
I1007 16:50:36.654313       1 proxier.go:824] "syncProxyRules complete" elapsed="41.936173ms"
I1007 16:50:38.200725       1 proxier.go:857] "Syncing iptables rules"
I1007 16:50:38.234696       1 proxier.go:824] "syncProxyRules complete" elapsed="34.055385ms"
I1007 16:50:38.720354       1 proxier.go:857] "Syncing iptables rules"
I1007 16:50:38.753327       1 proxier.go:824] "syncProxyRules complete" elapsed="33.056059ms"
I1007 16:50:39.754293       1 proxier.go:857] "Syncing iptables rules"
I1007 16:50:39.825453       1 proxier.go:824] "syncProxyRules complete" elapsed="71.301476ms"
I1007 16:50:46.925139       1 service.go:306] Service ephemeral-2117-5082/csi-hostpathplugin updated: 0 ports
I1007 16:50:46.925176       1 service.go:446] Removing service port "ephemeral-2117-5082/csi-hostpathplugin:dummy"
I1007 16:50:46.925321       1 proxier.go:857] "Syncing iptables rules"
I1007 16:50:46.972485       1 proxier.go:824] "syncProxyRules complete" elapsed="47.293388ms"
I1007 16:50:46.972641       1 proxier.go:857] "Syncing iptables rules"
I1007 16:50:47.013723       1 proxier.go:824] "syncProxyRules complete" elapsed="41.187916ms"
I1007 16:50:47.067186       1 service.go:306] Service provisioning-3701-4465/csi-hostpathplugin updated: 1 ports
I1007 16:50:48.014583       1 service.go:421] Adding new service port "provisioning-3701-4465/csi-hostpathplugin:dummy" at 100.68.68.220:12345/TCP
I1007 16:50:48.014755       1 proxier.go:857] "Syncing iptables rules"
I1007 16:50:48.067171       1 proxier.go:824] "syncProxyRules complete" elapsed="52.619805ms"
I1007 16:50:50.404333       1 service.go:306] Service conntrack-4290/svc-udp updated: 0 ports
I1007 16:50:50.404372       1 service.go:446] Removing service port "conntrack-4290/svc-udp:udp"
I1007 16:50:50.404594       1 proxier.go:857] "Syncing iptables rules"
I1007 16:50:50.450981       1 proxier.go:824] "syncProxyRules complete" elapsed="46.60016ms"
I1007 16:50:50.451494       1 proxier.go:857] "Syncing iptables rules"
I1007 16:50:50.500412       1 proxier.go:824] "syncProxyRules complete" elapsed="49.272466ms"
I1007 16:50:51.984922       1 proxier.go:857] "Syncing iptables rules"
I1007 16:50:52.029927       1 proxier.go:824] "syncProxyRules complete" elapsed="45.143136ms"
I1007 16:50:53.030872       1 proxier.go:857] "Syncing iptables rules"
I1007 16:50:53.068512       1 proxier.go:824] "syncProxyRules complete" elapsed="37.774319ms"
I1007 16:50:54.663856       1 proxier.go:857] "Syncing iptables rules"
I1007 16:50:54.707261       1 proxier.go:824] "syncProxyRules complete" elapsed="43.466769ms"
I1007 16:50:57.778548       1 proxier.go:857] "Syncing iptables rules"
I1007 16:50:57.823225       1 proxier.go:824] "syncProxyRules complete" elapsed="44.802067ms"
I1007 16:50:57.824091       1 service.go:306] Service services-3374/affinity-nodeport updated: 0 ports
I1007 16:50:57.824251       1 service.go:446] Removing service port "services-3374/affinity-nodeport"
I1007 16:50:57.824474       1 proxier.go:857] "Syncing iptables rules"
I1007 16:50:57.861475       1 proxier.go:824] "syncProxyRules complete" elapsed="37.211891ms"
I1007 16:50:57.979532       1 service.go:306] Service webhook-1096/e2e-test-webhook updated: 1 ports
I1007 16:50:58.862657       1 service.go:421] Adding new service port "webhook-1096/e2e-test-webhook" at 100.69.175.66:8443/TCP
I1007 16:50:58.862860       1 proxier.go:857] "Syncing iptables rules"
I1007 16:50:58.897586       1 proxier.go:824] "syncProxyRules complete" elapsed="34.947244ms"
I1007 16:51:00.326687       1 service.go:306] Service webhook-1096/e2e-test-webhook updated: 0 ports
I1007 16:51:00.326732       1 service.go:446] Removing service port "webhook-1096/e2e-test-webhook"
I1007 16:51:00.326861       1 proxier.go:857] "Syncing iptables rules"
I1007 16:51:00.440477       1 proxier.go:824] "syncProxyRules complete" elapsed="113.413951ms"
I1007 16:51:01.441009       1 proxier.go:857] "Syncing iptables rules"
I1007 16:51:01.499966       1 proxier.go:824] "syncProxyRules complete" elapsed="59.070461ms"
I1007 16:51:02.886882       1 proxier.go:857] "Syncing iptables rules"
I1007 16:51:02.960727       1 proxier.go:824] "syncProxyRules complete" elapsed="73.967782ms"
I1007 16:51:04.248734       1 service.go:306] Service services-8936/service-headless-toggled updated: 1 ports
I1007 16:51:04.248779       1 service.go:421] Adding new service port "services-8936/service-headless-toggled" at 100.65.15.87:80/TCP
I1007 16:51:04.248973       1 proxier.go:857] "Syncing iptables rules"
I1007 16:51:04.280770       1 proxier.go:824] "syncProxyRules complete" elapsed="31.986114ms"
I1007 16:51:04.280974       1 proxier.go:857] "Syncing iptables rules"
I1007 16:51:04.312405       1 proxier.go:824] "syncProxyRules complete" elapsed="31.596539ms"
I1007 16:51:05.748822       1 proxier.go:857] "Syncing iptables rules"
I1007 16:51:05.784869       1 proxier.go:824] "syncProxyRules complete" elapsed="36.183153ms"
I1007 16:51:07.201187       1 proxier.go:857] "Syncing iptables rules"
I1007 16:51:07.233327       1 proxier.go:824] "syncProxyRules complete" elapsed="32.333572ms"
I1007 16:51:07.442249       1 service.go:306] Service services-5652/test-service-2m4cv updated: 1 ports
I1007 16:51:07.442292       1 service.go:421] Adding new service port "services-5652/test-service-2m4cv:http" at 100.66.133.23:80/TCP
I1007 16:51:07.442415       1 proxier.go:857] "Syncing iptables rules"
I1007 16:51:07.474091       1 proxier.go:824] "syncProxyRules complete" elapsed="31.794824ms"
I1007 16:51:07.884813       1 service.go:306] Service services-5652/test-service-2m4cv updated: 1 ports
I1007 16:51:08.317973       1 service.go:306] Service services-5652/test-service-2m4cv updated: 1 ports
I1007 16:51:08.318022       1 service.go:423] Updating existing service port "services-5652/test-service-2m4cv:http" at 100.66.133.23:80/TCP
I1007 16:51:08.318143       1 proxier.go:857] "Syncing iptables rules"
I1007 16:51:08.353393       1 proxier.go:824] "syncProxyRules complete" elapsed="35.366568ms"
I1007 16:51:08.900979       1 service.go:306] Service services-5652/test-service-2m4cv updated: 0 ports
I1007 16:51:09.353549       1 service.go:446] Removing service port "services-5652/test-service-2m4cv:http"
I1007 16:51:09.353727       1 proxier.go:857] "Syncing iptables rules"
I1007 16:51:09.389201       1 proxier.go:824] "syncProxyRules complete" elapsed="35.660863ms"
I1007 16:51:13.088074       1 proxier.go:857] "Syncing iptables rules"
I1007 16:51:13.173326       1 proxier.go:824] "syncProxyRules complete" elapsed="85.42406ms"
I1007 16:51:19.094776       1 service.go:306] Service services-5777/nodeport-reuse updated: 1 ports
I1007 16:51:19.094859       1 service.go:421] Adding new service port "services-5777/nodeport-reuse" at 100.67.22.203:80/TCP
I1007 16:51:19.094995       1 proxier.go:857] "Syncing iptables rules"
I1007 16:51:19.134361       1 proxier.go:1292] "Opened local port" port="\"nodePort for services-5777/nodeport-reuse\" (:30730/tcp4)"
I1007 16:51:19.139740       1 proxier.go:824] "syncProxyRules complete" elapsed="44.870739ms"
I1007 16:51:19.139904       1 proxier.go:857] "Syncing iptables rules"
I1007 16:51:19.199199       1 proxier.go:824] "syncProxyRules complete" elapsed="59.404172ms"
I1007 16:51:19.240590       1 service.go:306] Service services-5777/nodeport-reuse updated: 0 ports
I1007 16:51:19.772308       1 service.go:306] Service provisioning-3701-4465/csi-hostpathplugin updated: 0 ports
I1007 16:51:20.200180       1 service.go:446] Removing service port "provisioning-3701-4465/csi-hostpathplugin:dummy"
I1007 16:51:20.200228       1 service.go:446] Removing service port "services-5777/nodeport-reuse"
I1007 16:51:20.200376       1 proxier.go:857] "Syncing iptables rules"
I1007 16:51:20.250388       1 proxier.go:824] "syncProxyRules complete" elapsed="50.205251ms"
I1007 16:51:21.502208       1 service.go:306] Service services-1689/affinity-clusterip-timeout updated: 1 ports
I1007 16:51:21.502267       1 service.go:421] Adding new service port "services-1689/affinity-clusterip-timeout" at 100.64.164.159:80/TCP
I1007 16:51:21.502391       1 proxier.go:857] "Syncing iptables rules"
I1007 16:51:21.540671       1 proxier.go:824] "syncProxyRules complete" elapsed="38.403386ms"
I1007 16:51:22.541516       1 proxier.go:857] "Syncing iptables rules"
I1007 16:51:22.581430       1 proxier.go:824] "syncProxyRules complete" elapsed="40.000809ms"
I1007 16:51:23.311419       1 service.go:306] Service services-5777/nodeport-reuse updated: 1 ports
I1007 16:51:23.311526       1 service.go:421] Adding new service port "services-5777/nodeport-reuse" at 100.65.164.220:80/TCP
I1007 16:51:23.311652       1 proxier.go:857] "Syncing iptables rules"
I1007 16:51:23.342759       1 proxier.go:1292] "Opened local port" port="\"nodePort for services-5777/nodeport-reuse\" (:30730/tcp4)"
I1007 16:51:23.347574       1 proxier.go:824] "syncProxyRules complete" elapsed="36.045639ms"
I1007 16:51:23.460645       1 service.go:306] Service services-5777/nodeport-reuse updated: 0 ports
I1007 16:51:24.348157       1 service.go:446] Removing service port "services-5777/nodeport-reuse"
I1007 16:51:24.348367       1 proxier.go:857] "Syncing iptables rules"
I1007 16:51:24.389710       1 proxier.go:824] "syncProxyRules complete" elapsed="41.550606ms"
I1007 16:51:25.708302       1 proxier.go:857] "Syncing iptables rules"
I1007 16:51:25.745273       1 proxier.go:824] "syncProxyRules complete" elapsed="37.061914ms"
I1007 16:51:33.870197       1 service.go:306] Service ephemeral-3921-4265/csi-hostpathplugin updated: 1 ports
I1007 16:51:33.870252       1 service.go:421] Adding new service port "ephemeral-3921-4265/csi-hostpathplugin:dummy" at 100.68.219.199:12345/TCP
I1007 16:51:33.870454       1 proxier.go:857] "Syncing iptables rules"
I1007 16:51:33.905356       1 proxier.go:824] "syncProxyRules complete" elapsed="35.10286ms"
I1007 16:51:33.905614       1 proxier.go:857] "Syncing iptables rules"
I1007 16:51:33.998856       1 proxier.go:824] "syncProxyRules complete" elapsed="93.321185ms"
I1007 16:51:36.808350       1 proxier.go:857] "Syncing iptables rules"
I1007 16:51:36.870093       1 proxier.go:824] "syncProxyRules complete" elapsed="61.868366ms"
I1007 16:51:39.337145       1 service.go:306] Service volume-expand-1517-9173/csi-hostpathplugin updated: 1 ports
I1007 16:51:39.337414       1 service.go:421] Adding new service port "volume-expand-1517-9173/csi-hostpathplugin:dummy" at 100.67.102.208:12345/TCP
I1007 16:51:39.337565       1 proxier.go:857] "Syncing iptables rules"
I1007 16:51:39.380497       1 proxier.go:824] "syncProxyRules complete" elapsed="43.294835ms"
I1007 16:51:39.380696       1 proxier.go:857] "Syncing iptables rules"
I1007 16:51:39.416031       1 proxier.go:824] "syncProxyRules complete" elapsed="35.493638ms"
I1007 16:51:43.254192       1 service.go:306] Service services-3820/affinity-clusterip updated: 1 ports
I1007 16:51:43.254240       1 service.go:421] Adding new service port "services-3820/affinity-clusterip" at 100.67.30.220:80/TCP
I1007 16:51:43.254371       1 proxier.go:857] "Syncing iptables rules"
I1007 16:51:43.289195       1 proxier.go:824] "syncProxyRules complete" elapsed="34.948847ms"
I1007 16:51:43.289411       1 proxier.go:857] "Syncing iptables rules"
I1007 16:51:43.326998       1 proxier.go:824] "syncProxyRules complete" elapsed="37.756953ms"
I1007 16:51:45.781688       1 proxier.go:857] "Syncing iptables rules"
I1007 16:51:45.884697       1 proxier.go:824] "syncProxyRules complete" elapsed="103.130629ms"
I1007 16:51:48.673485       1 proxier.go:857] "Syncing iptables rules"
I1007 16:51:48.796417       1 proxier.go:824] "syncProxyRules complete" elapsed="123.063278ms"
I1007 16:51:50.404218       1 proxier.go:857] "Syncing iptables rules"
I1007 16:51:50.442882       1 proxier.go:824] "syncProxyRules complete" elapsed="38.756609ms"
I1007 16:51:51.178492       1 proxier.go:857] "Syncing iptables rules"
I1007 16:51:51.219883       1 proxier.go:824] "syncProxyRules complete" elapsed="41.590747ms"
I1007 16:51:55.775388       1 service.go:306] Service webhook-7857/e2e-test-webhook updated: 1 ports
I1007 16:51:55.775437       1 service.go:421] Adding new service port "webhook-7857/e2e-test-webhook" at 100.64.99.161:8443/TCP
I1007 16:51:55.775571       1 proxier.go:857] "Syncing iptables rules"
I1007 16:51:55.836167       1 proxier.go:824] "syncProxyRules complete" elapsed="60.716973ms"
I1007 16:51:55.836434       1 proxier.go:857] "Syncing iptables rules"
I1007 16:51:55.878262       1 proxier.go:824] "syncProxyRules complete" elapsed="41.989731ms"
I1007 16:51:58.107478       1 service.go:306] Service webhook-7857/e2e-test-webhook updated: 0 ports
I1007 16:51:58.107519       1 service.go:446] Removing service port "webhook-7857/e2e-test-webhook"
I1007 16:51:58.107666       1 proxier.go:857] "Syncing iptables rules"
I1007 16:51:58.185604       1 proxier.go:824] "syncProxyRules complete" elapsed="78.073822ms"
I1007 16:51:58.185760       1 proxier.go:857] "Syncing iptables rules"
I1007 16:51:58.292247       1 proxier.go:824] "syncProxyRules complete" elapsed="106.596921ms"
I1007 16:52:01.821790       1 proxier.go:857] "Syncing iptables rules"
I1007 16:52:02.291997       1 proxier.go:824] "syncProxyRules complete" elapsed="470.338037ms"
I1007 16:52:02.832194       1 proxier.go:857] "Syncing iptables rules"
I1007 16:52:02.891559       1 proxier.go:824] "syncProxyRules complete" elapsed="59.509977ms"
I1007 16:52:11.199252       1 proxier.go:857] "Syncing iptables rules"
I1007 16:52:11.244109       1 proxier.go:824] "syncProxyRules complete" elapsed="44.973425ms"
I1007 16:52:11.244370       1 proxier.go:857] "Syncing iptables rules"
I1007 16:52:11.278841       1 proxier.go:824] "syncProxyRules complete" elapsed="34.693817ms"
I1007 16:52:16.846610       1 service.go:306] Service services-3820/affinity-clusterip updated: 0 ports
I1007 16:52:16.846899       1 service.go:446] Removing service port "services-3820/affinity-clusterip"
I1007 16:52:16.847275       1 proxier.go:857] "Syncing iptables rules"
I1007 16:52:16.908465       1 proxier.go:824] "syncProxyRules complete" elapsed="61.558871ms"
I1007 16:52:16.908618       1 proxier.go:857] "Syncing iptables rules"
I1007 16:52:16.964638       1 proxier.go:824] "syncProxyRules complete" elapsed="56.131262ms"
I1007 16:52:24.630261       1 service.go:306] Service services-1689/affinity-clusterip-timeout updated: 0 ports
I1007 16:52:24.630300       1 service.go:446] Removing service port "services-1689/affinity-clusterip-timeout"
I1007 16:52:24.630442       1 proxier.go:857] "Syncing iptables rules"
I1007 16:52:24.668050       1 proxier.go:824] "syncProxyRules complete" elapsed="37.739185ms"
I1007 16:52:24.668294       1 proxier.go:857] "Syncing iptables rules"
I1007 16:52:24.703341       1 proxier.go:824] "syncProxyRules complete" elapsed="35.235031ms"
I1007 16:52:30.113662       1 service.go:306] Service provisioning-4145-5462/csi-hostpathplugin updated: 1 ports
I1007 16:52:30.113703       1 service.go:421] Adding new service port "provisioning-4145-5462/csi-hostpathplugin:dummy" at 100.70.34.79:12345/TCP
I1007 16:52:30.113866       1 proxier.go:857] "Syncing iptables rules"
I1007 16:52:30.186696       1 proxier.go:824] "syncProxyRules complete" elapsed="72.226465ms"
I1007 16:52:30.186929       1 proxier.go:857] "Syncing iptables rules"
I1007 16:52:30.247417       1 proxier.go:824] "syncProxyRules complete" elapsed="60.665637ms"
I1007 16:52:36.701505       1 service.go:306] Service services-7319/multi-endpoint-test updated: 2 ports
I1007 16:52:36.701556       1 service.go:421] Adding new service port "services-7319/multi-endpoint-test:portname1" at 100.70.254.159:80/TCP
I1007 16:52:36.701574       1 service.go:421] Adding new service port "services-7319/multi-endpoint-test:portname2" at 100.70.254.159:81/TCP
I1007 16:52:36.701768       1 proxier.go:857] "Syncing iptables rules"
I1007 16:52:36.750779       1 proxier.go:824] "syncProxyRules complete" elapsed="49.21825ms"
I1007 16:52:36.751148       1 proxier.go:857] "Syncing iptables rules"
I1007 16:52:36.794247       1 proxier.go:824] "syncProxyRules complete" elapsed="43.427332ms"
I1007 16:52:37.796478       1 proxier.go:857] "Syncing iptables rules"
I1007 16:52:37.914047       1 proxier.go:824] "syncProxyRules complete" elapsed="117.793072ms"
I1007 16:52:38.914409       1 proxier.go:857] "Syncing iptables rules"
I1007 16:52:38.950381       1 proxier.go:824] "syncProxyRules complete" elapsed="36.180988ms"
I1007 16:52:45.904252       1 service.go:306] Service ephemeral-3921-4265/csi-hostpathplugin updated: 0 ports
I1007 16:52:45.904290       1 service.go:446] Removing service port "ephemeral-3921-4265/csi-hostpathplugin:dummy"
I1007 16:52:45.904395       1 proxier.go:857] "Syncing iptables rules"
I1007 16:52:45.944364       1 proxier.go:824] "syncProxyRules complete" elapsed="40.063635ms"
I1007 16:52:45.944583       1 proxier.go:857] "Syncing iptables rules"
I1007 16:52:46.028837       1 proxier.go:824] "syncProxyRules complete" elapsed="84.426668ms"
I1007 16:52:47.029788       1 proxier.go:857] "Syncing iptables rules"
I1007 16:52:47.080122       1 proxier.go:824] "syncProxyRules complete" elapsed="50.487405ms"
I1007 16:52:49.747386       1 proxier.go:857] "Syncing iptables rules"
I1007 16:52:49.843999       1 proxier.go:824] "syncProxyRules complete" elapsed="96.702117ms"
I1007 16:52:50.489926       1 proxier.go:857] "Syncing iptables rules"
I1007 16:52:50.641630       1 proxier.go:824] "syncProxyRules complete" elapsed="151.793898ms"
I1007 16:52:51.049265       1 service.go:306] Service services-7319/multi-endpoint-test updated: 0 ports
I1007 16:52:51.049313       1 service.go:446] Removing service port "services-7319/multi-endpoint-test:portname1"
I1007 16:52:51.049327       1 service.go:446] Removing service port "services-7319/multi-endpoint-test:portname2"
I1007 16:52:51.049453       1 proxier.go:857] "Syncing iptables rules"
I1007 16:52:51.108906       1 proxier.go:824] "syncProxyRules complete" elapsed="59.583298ms"
I1007 16:52:52.109146       1 proxier.go:857] "Syncing iptables rules"
I1007 16:52:52.157054       1 proxier.go:824] "syncProxyRules complete" elapsed="48.018911ms"
I1007 16:52:52.666818       1 service.go:306] Service webhook-7449/e2e-test-webhook updated: 1 ports
I1007 16:52:53.158360       1 service.go:421] Adding new service port "webhook-7449/e2e-test-webhook" at 100.65.168.220:8443/TCP
I1007 16:52:53.158531       1 proxier.go:857] "Syncing iptables rules"
I1007 16:52:53.199476       1 proxier.go:824] "syncProxyRules complete" elapsed="41.147103ms"
I1007 16:52:54.996191       1 service.go:306] Service webhook-7449/e2e-test-webhook updated: 0 ports
I1007 16:52:54.996235       1 service.go:446] Removing service port "webhook-7449/e2e-test-webhook"
I1007 16:52:54.996327       1 proxier.go:857] "Syncing iptables rules"
I1007 16:52:55.036793       1 proxier.go:824] "syncProxyRules complete" elapsed="40.541525ms"
I1007 16:52:55.037065       1 proxier.go:857] "Syncing iptables rules"
I1007 16:52:55.079592       1 proxier.go:824] "syncProxyRules complete" elapsed="42.620005ms"
I1007 16:52:58.175497       1 service.go:306] Service aggregator-685/sample-api updated: 1 ports
I1007 16:52:58.175543       1 service.go:421] Adding new service port "aggregator-685/sample-api" at 100.70.96.0:7443/TCP
I1007 16:52:58.175670       1 proxier.go:857] "Syncing iptables rules"
I1007 16:52:58.212637       1 proxier.go:824] "syncProxyRules complete" elapsed="37.088284ms"
I1007 16:52:58.212860       1 proxier.go:857] "Syncing iptables rules"
I1007 16:52:58.249561       1 proxier.go:824] "syncProxyRules complete" elapsed="36.884422ms"
I1007 16:53:13.281656       1 proxier.go:857] "Syncing iptables rules"
I1007 16:53:13.314556       1 proxier.go:824] "syncProxyRules complete" elapsed="32.990437ms"
I1007 16:53:15.726939       1 service.go:306] Service endpointslice-579/example-int-port updated: 1 ports
I1007 16:53:15.726985       1 service.go:421] Adding new service port "endpointslice-579/example-int-port:example" at 100.64.169.186:80/TCP
I1007 16:53:15.727129       1 proxier.go:857] "Syncing iptables rules"
I1007 16:53:15.870476       1 service.go:306] Service endpointslice-579/example-named-port updated: 1 ports
I1007 16:53:15.911541       1 proxier.go:824] "syncProxyRules complete" elapsed="184.528128ms"
I1007 16:53:15.911585       1 service.go:421] Adding new service port "endpointslice-579/example-named-port:http" at 100.70.41.205:80/TCP
I1007 16:53:15.911760       1 proxier.go:857] "Syncing iptables rules"
I1007 16:53:16.021334       1 service.go:306] Service endpointslice-579/example-no-match updated: 1 ports
I1007 16:53:16.061543       1 proxier.go:824] "syncProxyRules complete" elapsed="149.953386ms"
I1007 16:53:17.061854       1 service.go:421] Adding new service port "endpointslice-579/example-no-match:example-no-match" at 100.69.23.123:80/TCP
I1007 16:53:17.062009       1 proxier.go:857] "Syncing iptables rules"
I1007 16:53:17.095660       1 proxier.go:824] "syncProxyRules complete" elapsed="33.841342ms"
I1007 16:53:18.095891       1 proxier.go:857] "Syncing iptables rules"
I1007 16:53:18.129950       1 proxier.go:824] "syncProxyRules complete" elapsed="34.204854ms"
I1007 16:53:19.018761       1 service.go:306] Service aggregator-685/sample-api updated: 0 ports
I1007 16:53:19.018801       1 service.go:446] Removing service port "aggregator-685/sample-api"
I1007 16:53:19.018922       1 proxier.go:857] "Syncing iptables rules"
I1007 16:53:19.061925       1 proxier.go:824] "syncProxyRules complete" elapsed="43.113814ms"
I1007 16:53:20.062206       1 proxier.go:857] "Syncing iptables rules"
I1007 16:53:20.105170       1 proxier.go:824] "syncProxyRules complete" elapsed="43.086629ms"
I1007 16:53:22.724302       1 service.go:306] Service endpointslicemirroring-8956/example-custom-endpoints updated: 1 ports
I1007 16:53:22.724346       1 service.go:421] Adding new service port "endpointslicemirroring-8956/example-custom-endpoints:example" at 100.69.158.4:80/TCP
I1007 16:53:22.724482       1 proxier.go:857] "Syncing iptables rules"
I1007 16:53:22.759624       1 proxier.go:824] "syncProxyRules complete" elapsed="35.272649ms"
I1007 16:53:22.885543       1 proxier.go:857] "Syncing iptables rules"
I1007 16:53:22.919600       1 proxier.go:824] "syncProxyRules complete" elapsed="34.15352ms"
I1007 16:53:23.730383       1 service.go:306] Service services-5878/nodeport-update-service updated: 1 ports
I1007 16:53:23.730425       1 service.go:421] Adding new service port "services-5878/nodeport-update-service" at 100.68.121.21:80/TCP
I1007 16:53:23.730572       1 proxier.go:857] "Syncing iptables rules"
I1007 16:53:23.769376       1 proxier.go:824] "syncProxyRules complete" elapsed="38.940435ms"
I1007 16:53:24.025022       1 service.go:306] Service services-5878/nodeport-update-service updated: 1 ports
I1007 16:53:24.770047       1 service.go:421] Adding new service port "services-5878/nodeport-update-service:tcp-port" at 100.68.121.21:80/TCP
I1007 16:53:24.770075       1 service.go:446] Removing service port "services-5878/nodeport-update-service"
I1007 16:53:24.770212       1 proxier.go:857] "Syncing iptables rules"
I1007 16:53:24.804893       1 proxier.go:1292] "Opened local port" port="\"nodePort for services-5878/nodeport-update-service:tcp-port\" (:30229/tcp4)"
I1007 16:53:24.814008       1 proxier.go:824] "syncProxyRules complete" elapsed="44.001962ms"
I1007 16:53:27.369512       1 service.go:306] Service volume-expand-4617-6577/csi-hostpathplugin updated: 1 ports
I1007 16:53:27.369557       1 service.go:421] Adding new service port "volume-expand-4617-6577/csi-hostpathplugin:dummy" at 100.71.172.172:12345/TCP
I1007 16:53:27.369662       1 proxier.go:857] "Syncing iptables rules"
I1007 16:53:27.404813       1 proxier.go:824] "syncProxyRules complete" elapsed="35.252623ms"
I1007 16:53:27.405073       1 proxier.go:857] "Syncing iptables rules"
I1007 16:53:27.438902       1 proxier.go:824] "syncProxyRules complete"
elapsed=\"34.036236ms\"\nI1007 16:53:28.403448       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:53:28.452364       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"49.047062ms\"\nI1007 16:53:29.017024       1 service.go:306] Service endpointslicemirroring-8956/example-custom-endpoints updated: 0 ports\nI1007 16:53:29.452614       1 service.go:446] Removing service port \"endpointslicemirroring-8956/example-custom-endpoints:example\"\nI1007 16:53:29.452803       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:53:29.486141       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"33.547921ms\"\nI1007 16:53:30.192631       1 service.go:306] Service volume-expand-1517-9173/csi-hostpathplugin updated: 0 ports\nI1007 16:53:30.486359       1 service.go:446] Removing service port \"volume-expand-1517-9173/csi-hostpathplugin:dummy\"\nI1007 16:53:30.486668       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:53:30.525013       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"38.643437ms\"\nI1007 16:53:31.232179       1 service.go:306] Service provisioning-4145-5462/csi-hostpathplugin updated: 0 ports\nI1007 16:53:31.495173       1 service.go:306] Service webhook-1328/e2e-test-webhook updated: 1 ports\nI1007 16:53:31.495219       1 service.go:446] Removing service port \"provisioning-4145-5462/csi-hostpathplugin:dummy\"\nI1007 16:53:31.495241       1 service.go:421] Adding new service port \"webhook-1328/e2e-test-webhook\" at 100.64.231.106:8443/TCP\nI1007 16:53:31.495471       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:53:31.534664       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"39.410501ms\"\nI1007 16:53:32.535120       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:53:32.673399       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"138.569483ms\"\nI1007 16:53:34.092776       1 service.go:306] Service webhook-1328/e2e-test-webhook updated: 0 ports\nI1007 16:53:34.092827       1 
service.go:446] Removing service port \"webhook-1328/e2e-test-webhook\"\nI1007 16:53:34.093048       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:53:34.130538       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"37.700153ms\"\nI1007 16:53:35.130792       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:53:35.179407       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"48.746282ms\"\nI1007 16:53:37.454651       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:53:37.500518       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"45.961907ms\"\nI1007 16:53:37.599146       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:53:37.645273       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"46.214021ms\"\nI1007 16:53:38.459687       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:53:38.506557       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"46.960109ms\"\nI1007 16:53:39.507498       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:53:39.623770       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"116.639714ms\"\nI1007 16:53:43.590262       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:53:43.638843       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"48.67053ms\"\nI1007 16:53:44.411414       1 service.go:306] Service webhook-1193/e2e-test-webhook updated: 1 ports\nI1007 16:53:44.411462       1 service.go:421] Adding new service port \"webhook-1193/e2e-test-webhook\" at 100.64.219.112:8443/TCP\nI1007 16:53:44.411638       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:53:44.487610       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"76.138878ms\"\nI1007 16:53:45.394187       1 service.go:306] Service kubectl-7891/agnhost-primary updated: 1 ports\nI1007 16:53:45.394230       1 service.go:421] Adding new service port \"kubectl-7891/agnhost-primary\" at 100.66.139.182:6379/TCP\nI1007 16:53:45.394338       1 proxier.go:857] \"Syncing iptables 
rules\"\nI1007 16:53:45.433307       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"39.066659ms\"\nI1007 16:53:46.434271       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:53:46.522245       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"88.129106ms\"\nI1007 16:53:49.317070       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:53:49.354777       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"37.890401ms\"\nI1007 16:53:49.658244       1 service.go:306] Service webhook-1193/e2e-test-webhook updated: 0 ports\nI1007 16:53:49.658350       1 service.go:446] Removing service port \"webhook-1193/e2e-test-webhook\"\nI1007 16:53:49.658539       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:53:49.697067       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"38.72225ms\"\nI1007 16:53:50.697358       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:53:50.748397       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"51.16799ms\"\nI1007 16:53:53.379820       1 service.go:306] Service endpointslice-579/example-int-port updated: 0 ports\nI1007 16:53:53.379881       1 service.go:446] Removing service port \"endpointslice-579/example-int-port:example\"\nI1007 16:53:53.380027       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:53:53.394100       1 service.go:306] Service endpointslice-579/example-named-port updated: 0 ports\nI1007 16:53:53.417097       1 service.go:306] Service endpointslice-579/example-no-match updated: 0 ports\nI1007 16:53:53.421751       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"41.859804ms\"\nI1007 16:53:53.421782       1 service.go:446] Removing service port \"endpointslice-579/example-named-port:http\"\nI1007 16:53:53.421796       1 service.go:446] Removing service port \"endpointslice-579/example-no-match:example-no-match\"\nI1007 16:53:53.421946       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:53:53.472447       1 proxier.go:824] \"syncProxyRules 
complete\" elapsed=\"50.651779ms\"\nI1007 16:53:54.154072       1 service.go:306] Service services-5878/nodeport-update-service updated: 2 ports\nI1007 16:53:54.472672       1 service.go:423] Updating existing service port \"services-5878/nodeport-update-service:tcp-port\" at 100.68.121.21:80/TCP\nI1007 16:53:54.472721       1 service.go:421] Adding new service port \"services-5878/nodeport-update-service:udp-port\" at 100.68.121.21:80/UDP\nI1007 16:53:54.472945       1 proxier.go:841] \"Stale service\" protocol=\"udp\" svcPortName=\"services-5878/nodeport-update-service:udp-port\" clusterIP=\"100.68.121.21\"\nI1007 16:53:54.473021       1 proxier.go:851] Stale udp service NodePort services-5878/nodeport-update-service:udp-port -> 32158\nI1007 16:53:54.473047       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:53:54.501668       1 proxier.go:1292] \"Opened local port\" port=\"\\\"nodePort for services-5878/nodeport-update-service:udp-port\\\" (:32158/udp4)\"\nI1007 16:53:54.517209       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"44.457157ms\"\nW1007 16:53:55.151689       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice\n==== END logs for container kube-proxy of pod kube-system/kube-proxy-ip-172-20-42-249.sa-east-1.compute.internal ====\n==== START logs for container kube-proxy of pod kube-system/kube-proxy-ip-172-20-43-90.sa-east-1.compute.internal ====\nI1007 16:32:21.902374       1 flags.go:59] FLAG: --add-dir-header=\"false\"\nI1007 16:32:21.903029       1 flags.go:59] FLAG: --alsologtostderr=\"true\"\nI1007 16:32:21.903155       1 flags.go:59] FLAG: --bind-address=\"0.0.0.0\"\nI1007 16:32:21.903225       1 flags.go:59] FLAG: --bind-address-hard-fail=\"false\"\nI1007 16:32:21.903301       1 flags.go:59] FLAG: --boot-id-file=\"/proc/sys/kernel/random/boot_id\"\nI1007 16:32:21.903372       1 flags.go:59] FLAG: --cleanup=\"false\"\nI1007 
16:32:21.903442       1 flags.go:59] FLAG: --cluster-cidr=\"100.96.0.0/11\"\nI1007 16:32:21.903513       1 flags.go:59] FLAG: --config=\"\"\nI1007 16:32:21.903596       1 flags.go:59] FLAG: --config-sync-period=\"15m0s\"\nI1007 16:32:21.903651       1 flags.go:59] FLAG: --conntrack-max-per-core=\"131072\"\nI1007 16:32:21.903718       1 flags.go:59] FLAG: --conntrack-min=\"131072\"\nI1007 16:32:21.903765       1 flags.go:59] FLAG: --conntrack-tcp-timeout-close-wait=\"1h0m0s\"\nI1007 16:32:21.903851       1 flags.go:59] FLAG: --conntrack-tcp-timeout-established=\"24h0m0s\"\nI1007 16:32:21.903912       1 flags.go:59] FLAG: --detect-local-mode=\"\"\nI1007 16:32:21.903967       1 flags.go:59] FLAG: --feature-gates=\"\"\nI1007 16:32:21.904056       1 flags.go:59] FLAG: --healthz-bind-address=\"0.0.0.0:10256\"\nI1007 16:32:21.904130       1 flags.go:59] FLAG: --healthz-port=\"10256\"\nI1007 16:32:21.904245       1 flags.go:59] FLAG: --help=\"false\"\nI1007 16:32:21.904356       1 flags.go:59] FLAG: --hostname-override=\"ip-172-20-43-90.sa-east-1.compute.internal\"\nI1007 16:32:21.904423       1 flags.go:59] FLAG: --iptables-masquerade-bit=\"14\"\nI1007 16:32:21.904480       1 flags.go:59] FLAG: --iptables-min-sync-period=\"1s\"\nI1007 16:32:21.904541       1 flags.go:59] FLAG: --iptables-sync-period=\"30s\"\nI1007 16:32:21.904597       1 flags.go:59] FLAG: --ipvs-exclude-cidrs=\"[]\"\nI1007 16:32:21.904690       1 flags.go:59] FLAG: --ipvs-min-sync-period=\"0s\"\nI1007 16:32:21.904746       1 flags.go:59] FLAG: --ipvs-scheduler=\"\"\nI1007 16:32:21.904803       1 flags.go:59] FLAG: --ipvs-strict-arp=\"false\"\nI1007 16:32:21.904884       1 flags.go:59] FLAG: --ipvs-sync-period=\"30s\"\nI1007 16:32:21.904938       1 flags.go:59] FLAG: --ipvs-tcp-timeout=\"0s\"\nI1007 16:32:21.905003       1 flags.go:59] FLAG: --ipvs-tcpfin-timeout=\"0s\"\nI1007 16:32:21.905056       1 flags.go:59] FLAG: --ipvs-udp-timeout=\"0s\"\nI1007 16:32:21.905123       1 flags.go:59] FLAG: 
--kube-api-burst=\"10\"\nI1007 16:32:21.905187       1 flags.go:59] FLAG: --kube-api-content-type=\"application/vnd.kubernetes.protobuf\"\nI1007 16:32:21.905249       1 flags.go:59] FLAG: --kube-api-qps=\"5\"\nI1007 16:32:21.905324       1 flags.go:59] FLAG: --kubeconfig=\"/var/lib/kube-proxy/kubeconfig\"\nI1007 16:32:21.905391       1 flags.go:59] FLAG: --log-backtrace-at=\":0\"\nI1007 16:32:21.905469       1 flags.go:59] FLAG: --log-dir=\"\"\nI1007 16:32:21.905526       1 flags.go:59] FLAG: --log-file=\"/var/log/kube-proxy.log\"\nI1007 16:32:21.905590       1 flags.go:59] FLAG: --log-file-max-size=\"1800\"\nI1007 16:32:21.905645       1 flags.go:59] FLAG: --log-flush-frequency=\"5s\"\nI1007 16:32:21.905705       1 flags.go:59] FLAG: --logtostderr=\"false\"\nI1007 16:32:21.905757       1 flags.go:59] FLAG: --machine-id-file=\"/etc/machine-id,/var/lib/dbus/machine-id\"\nI1007 16:32:21.905821       1 flags.go:59] FLAG: --masquerade-all=\"false\"\nI1007 16:32:21.905871       1 flags.go:59] FLAG: --master=\"https://api.internal.e2e-f7af145b3f-58f2d.test-cncf-aws.k8s.io\"\nI1007 16:32:21.905953       1 flags.go:59] FLAG: --metrics-bind-address=\"127.0.0.1:10249\"\nI1007 16:32:21.906000       1 flags.go:59] FLAG: --metrics-port=\"10249\"\nI1007 16:32:21.906079       1 flags.go:59] FLAG: --nodeport-addresses=\"[]\"\nI1007 16:32:21.906141       1 flags.go:59] FLAG: --one-output=\"false\"\nI1007 16:32:21.906201       1 flags.go:59] FLAG: --oom-score-adj=\"-998\"\nI1007 16:32:21.906258       1 flags.go:59] FLAG: --profiling=\"false\"\nI1007 16:32:21.906326       1 flags.go:59] FLAG: --proxy-mode=\"\"\nI1007 16:32:21.906390       1 flags.go:59] FLAG: --proxy-port-range=\"\"\nI1007 16:32:21.906454       1 flags.go:59] FLAG: --show-hidden-metrics-for-version=\"\"\nI1007 16:32:21.906516       1 flags.go:59] FLAG: --skip-headers=\"false\"\nI1007 16:32:21.908720       1 flags.go:59] FLAG: --skip-log-headers=\"false\"\nI1007 16:32:21.908740       1 flags.go:59] FLAG: 
--stderrthreshold=\"2\"\nI1007 16:32:21.908746       1 flags.go:59] FLAG: --udp-timeout=\"250ms\"\nI1007 16:32:21.908762       1 flags.go:59] FLAG: --v=\"2\"\nI1007 16:32:21.908768       1 flags.go:59] FLAG: --version=\"false\"\nI1007 16:32:21.908777       1 flags.go:59] FLAG: --vmodule=\"\"\nI1007 16:32:21.908783       1 flags.go:59] FLAG: --write-config-to=\"\"\nW1007 16:32:21.908790       1 server.go:220] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.\nI1007 16:32:21.908904       1 feature_gate.go:243] feature gates: &{map[]}\nI1007 16:32:21.909031       1 feature_gate.go:243] feature gates: &{map[]}\nI1007 16:32:21.965008       1 node.go:172] Successfully retrieved node IP: 172.20.43.90\nI1007 16:32:21.965045       1 server_others.go:140] Detected node IP 172.20.43.90\nW1007 16:32:21.965183       1 server_others.go:598] Unknown proxy mode \"\", assuming iptables proxy\nI1007 16:32:21.965356       1 server_others.go:177] DetectLocalMode: 'ClusterCIDR'\nI1007 16:32:21.999350       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary\nI1007 16:32:21.999386       1 server_others.go:212] Using iptables Proxier.\nI1007 16:32:21.999419       1 server_others.go:219] creating dualStackProxier for iptables.\nW1007 16:32:21.999440       1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6\nI1007 16:32:21.999603       1 utils.go:375] Changed sysctl \"net/ipv4/conf/all/route_localnet\": 0 -> 1\nI1007 16:32:21.999748       1 proxier.go:276] \"missing br-netfilter module or unset sysctl br-nf-call-iptables; proxy may not work as intended\"\nI1007 16:32:21.999772       1 proxier.go:282] \"using iptables mark for masquerade\" ipFamily=IPv4 mark=\"0x00004000\"\nI1007 16:32:21.999911       1 proxier.go:330] \"iptables sync params\" ipFamily=IPv4 minSyncPeriod=\"1s\" syncPeriod=\"30s\" 
burstSyncs=2\nI1007 16:32:22.000004       1 proxier.go:340] \"iptables supports --random-fully\" ipFamily=IPv4\nI1007 16:32:22.000237       1 proxier.go:276] \"missing br-netfilter module or unset sysctl br-nf-call-iptables; proxy may not work as intended\"\nI1007 16:32:22.000255       1 proxier.go:282] \"using iptables mark for masquerade\" ipFamily=IPv6 mark=\"0x00004000\"\nI1007 16:32:22.000286       1 proxier.go:330] \"iptables sync params\" ipFamily=IPv6 minSyncPeriod=\"1s\" syncPeriod=\"30s\" burstSyncs=2\nI1007 16:32:22.000308       1 proxier.go:340] \"iptables supports --random-fully\" ipFamily=IPv6\nI1007 16:32:22.000468       1 server.go:643] Version: v1.21.5\nI1007 16:32:22.001435       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 262144\nI1007 16:32:22.001466       1 conntrack.go:52] Setting nf_conntrack_max to 262144\nI1007 16:32:22.001570       1 mount_linux.go:197] Detected OS without systemd\nI1007 16:32:22.001809       1 conntrack.go:83] Setting conntrack hashsize to 65536\nI1007 16:32:22.009409       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400\nI1007 16:32:22.009809       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600\nI1007 16:32:22.010448       1 config.go:315] Starting service config controller\nI1007 16:32:22.010578       1 shared_informer.go:240] Waiting for caches to sync for service config\nI1007 16:32:22.010710       1 config.go:224] Starting endpoint slice config controller\nI1007 16:32:22.010775       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config\nW1007 16:32:22.012887       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice\nI1007 16:32:22.013003       1 service.go:306] Service default/kubernetes updated: 1 ports\nI1007 16:32:22.013174       1 service.go:306] Service kube-system/kube-dns updated: 3 
ports\nW1007 16:32:22.014265       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice\nI1007 16:32:22.111485       1 shared_informer.go:247] Caches are synced for endpoint slice config \nI1007 16:32:22.111485       1 shared_informer.go:247] Caches are synced for service config \nI1007 16:32:22.111629       1 proxier.go:816] \"Not syncing iptables until Services and Endpoints have been received from master\"\nI1007 16:32:22.111718       1 proxier.go:816] \"Not syncing iptables until Services and Endpoints have been received from master\"\nI1007 16:32:22.111730       1 service.go:421] Adding new service port \"default/kubernetes:https\" at 100.64.0.1:443/TCP\nI1007 16:32:22.111753       1 service.go:421] Adding new service port \"kube-system/kube-dns:dns\" at 100.64.0.10:53/UDP\nI1007 16:32:22.111766       1 service.go:421] Adding new service port \"kube-system/kube-dns:dns-tcp\" at 100.64.0.10:53/TCP\nI1007 16:32:22.111778       1 service.go:421] Adding new service port \"kube-system/kube-dns:metrics\" at 100.64.0.10:9153/TCP\nI1007 16:32:22.111824       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:32:22.162830       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"51.106186ms\"\nI1007 16:32:22.162987       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:32:22.197685       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"34.816166ms\"\nI1007 16:32:42.166059       1 proxier.go:841] \"Stale service\" protocol=\"udp\" svcPortName=\"kube-system/kube-dns:dns\" clusterIP=\"100.64.0.10\"\nI1007 16:32:42.166089       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:32:42.217970       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"51.979199ms\"\nI1007 16:32:46.638052       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:32:46.673207       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"35.295277ms\"\nI1007 16:32:47.443288    
   1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:32:47.515700       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"72.459786ms\"\nI1007 16:32:48.515972       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:32:48.562287       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"46.435827ms\"\nI1007 16:35:24.208188       1 service.go:306] Service services-1343/no-pods updated: 1 ports\nI1007 16:35:24.208255       1 service.go:421] Adding new service port \"services-1343/no-pods\" at 100.69.169.152:80/TCP\nI1007 16:35:24.208293       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:35:24.246825       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"38.58598ms\"\nI1007 16:35:24.247007       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:35:24.286896       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"40.030102ms\"\nI1007 16:35:30.192665       1 service.go:306] Service provisioning-7539-5141/csi-hostpathplugin updated: 1 ports\nI1007 16:35:30.192706       1 service.go:421] Adding new service port \"provisioning-7539-5141/csi-hostpathplugin:dummy\" at 100.71.122.109:12345/TCP\nI1007 16:35:30.192749       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:35:30.232337       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"39.620996ms\"\nI1007 16:35:30.232534       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:35:30.270510       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"38.004288ms\"\nI1007 16:35:36.477382       1 service.go:306] Service pods-4148/fooservice updated: 1 ports\nI1007 16:35:36.477434       1 service.go:421] Adding new service port \"pods-4148/fooservice\" at 100.69.97.29:8765/TCP\nI1007 16:35:36.477484       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:35:36.520630       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"43.185743ms\"\nI1007 16:35:36.520715       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:35:36.573272       1 proxier.go:824] 
\"syncProxyRules complete\" elapsed=\"52.58275ms\"\nI1007 16:35:41.489934       1 service.go:306] Service kubectl-945/agnhost-replica updated: 1 ports\nI1007 16:35:41.490051       1 service.go:421] Adding new service port \"kubectl-945/agnhost-replica\" at 100.68.29.108:6379/TCP\nI1007 16:35:41.490238       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:35:41.665594       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"175.541048ms\"\nI1007 16:35:41.665725       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:35:41.754542       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"88.889729ms\"\nI1007 16:35:42.272163       1 service.go:306] Service kubectl-945/agnhost-primary updated: 1 ports\nI1007 16:35:42.756719       1 service.go:421] Adding new service port \"kubectl-945/agnhost-primary\" at 100.68.209.227:6379/TCP\nI1007 16:35:42.756847       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:35:42.858216       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"101.546565ms\"\nI1007 16:35:43.049709       1 service.go:306] Service kubectl-945/frontend updated: 1 ports\nI1007 16:35:43.859220       1 service.go:421] Adding new service port \"kubectl-945/frontend\" at 100.70.223.216:80/TCP\nI1007 16:35:43.859298       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:35:43.926438       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"67.231994ms\"\nI1007 16:35:44.477073       1 service.go:306] Service ephemeral-8231-6366/csi-hostpathplugin updated: 1 ports\nI1007 16:35:44.496942       1 service.go:421] Adding new service port \"ephemeral-8231-6366/csi-hostpathplugin:dummy\" at 100.64.143.105:12345/TCP\nI1007 16:35:44.497006       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:35:44.534133       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"37.20985ms\"\nI1007 16:35:45.078343       1 service.go:306] Service pods-4148/fooservice updated: 0 ports\nI1007 16:35:45.534330       1 service.go:446] Removing 
service port \"pods-4148/fooservice\"\nI1007 16:35:45.534434       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:35:45.568746       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"34.418438ms\"\nI1007 16:35:47.416656       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:35:47.474596       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"57.946761ms\"\nI1007 16:35:48.316766       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:35:48.386090       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"69.360297ms\"\nI1007 16:35:48.566370       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:35:48.588227       1 service.go:306] Service proxy-5208/test-service updated: 1 ports\nI1007 16:35:48.612721       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"46.420301ms\"\nI1007 16:35:49.177671       1 service.go:306] Service webhook-5849/e2e-test-webhook updated: 1 ports\nI1007 16:35:49.612996       1 service.go:421] Adding new service port \"proxy-5208/test-service\" at 100.69.44.245:80/TCP\nI1007 16:35:49.613026       1 service.go:421] Adding new service port \"webhook-5849/e2e-test-webhook\" at 100.67.80.195:8443/TCP\nI1007 16:35:49.613137       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:35:49.676475       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"63.502733ms\"\nI1007 16:35:50.676970       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:35:50.716244       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"39.35262ms\"\nI1007 16:35:53.334691       1 service.go:306] Service webhook-5849/e2e-test-webhook updated: 0 ports\nI1007 16:35:53.334738       1 service.go:446] Removing service port \"webhook-5849/e2e-test-webhook\"\nI1007 16:35:53.334794       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:35:53.371846       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"37.09741ms\"\nI1007 16:35:53.425021       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 
16:35:53.464199       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"39.2172ms\"\nI1007 16:35:54.465002       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:35:54.513857       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"48.955452ms\"\nI1007 16:35:55.969746       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:35:55.982796       1 service.go:306] Service proxy-5208/test-service updated: 0 ports\nI1007 16:35:56.110651       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"140.935705ms\"\nI1007 16:35:57.111640       1 service.go:446] Removing service port \"proxy-5208/test-service\"\nI1007 16:35:57.111753       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:35:57.148712       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"37.071347ms\"\nI1007 16:35:59.038035       1 service.go:306] Service volume-expand-3529-148/csi-hostpathplugin updated: 1 ports\nI1007 16:35:59.038082       1 service.go:421] Adding new service port \"volume-expand-3529-148/csi-hostpathplugin:dummy\" at 100.70.215.250:12345/TCP\nI1007 16:35:59.038119       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:35:59.076125       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"38.037319ms\"\nI1007 16:35:59.076370       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:35:59.135655       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"59.41813ms\"\nI1007 16:35:59.510437       1 service.go:306] Service proxy-4616/proxy-service-bc8j8 updated: 4 ports\nI1007 16:36:00.136802       1 service.go:421] Adding new service port \"proxy-4616/proxy-service-bc8j8:portname1\" at 100.67.38.141:80/TCP\nI1007 16:36:00.136860       1 service.go:421] Adding new service port \"proxy-4616/proxy-service-bc8j8:portname2\" at 100.67.38.141:81/TCP\nI1007 16:36:00.136872       1 service.go:421] Adding new service port \"proxy-4616/proxy-service-bc8j8:tlsportname1\" at 100.67.38.141:443/TCP\nI1007 16:36:00.136891       1 service.go:421] Adding new 
service port "proxy-4616/proxy-service-bc8j8:tlsportname2" at 100.67.38.141:444/TCP
I1007 16:36:00.136941       1 proxier.go:857] "Syncing iptables rules"
I1007 16:36:00.179757       1 proxier.go:824] "syncProxyRules complete" elapsed="43.004291ms"
I1007 16:36:05.126339       1 proxier.go:857] "Syncing iptables rules"
I1007 16:36:05.181938       1 proxier.go:824] "syncProxyRules complete" elapsed="55.655672ms"
I1007 16:36:07.121743       1 proxier.go:857] "Syncing iptables rules"
I1007 16:36:07.309405       1 proxier.go:824] "syncProxyRules complete" elapsed="188.322667ms"
I1007 16:36:10.320436       1 proxier.go:857] "Syncing iptables rules"
I1007 16:36:10.381751       1 proxier.go:824] "syncProxyRules complete" elapsed="61.364656ms"
I1007 16:36:11.317615       1 proxier.go:857] "Syncing iptables rules"
I1007 16:36:11.363316       1 proxier.go:824] "syncProxyRules complete" elapsed="45.448788ms"
I1007 16:36:11.952968       1 proxier.go:857] "Syncing iptables rules"
I1007 16:36:11.986119       1 proxier.go:824] "syncProxyRules complete" elapsed="33.195732ms"
I1007 16:36:13.686972       1 proxier.go:857] "Syncing iptables rules"
I1007 16:36:13.735659       1 proxier.go:824] "syncProxyRules complete" elapsed="48.664853ms"
I1007 16:36:19.109275       1 service.go:306] Service ephemeral-9076-5159/csi-hostpathplugin updated: 1 ports
I1007 16:36:19.109317       1 service.go:421] Adding new service port "ephemeral-9076-5159/csi-hostpathplugin:dummy" at 100.66.40.13:12345/TCP
I1007 16:36:19.109370       1 proxier.go:857] "Syncing iptables rules"
I1007 16:36:19.154475       1 proxier.go:824] "syncProxyRules complete" elapsed="45.145911ms"
I1007 16:36:19.154557       1 proxier.go:857] "Syncing iptables rules"
I1007 16:36:19.192194       1 proxier.go:824] "syncProxyRules complete" elapsed="37.675521ms"
I1007 16:36:19.970182       1 service.go:306] Service proxy-4616/proxy-service-bc8j8 updated: 0 ports
I1007 16:36:20.192342       1 service.go:446] Removing service port "proxy-4616/proxy-service-bc8j8:tlsportname1"
I1007 16:36:20.192408       1 service.go:446] Removing service port "proxy-4616/proxy-service-bc8j8:tlsportname2"
I1007 16:36:20.192418       1 service.go:446] Removing service port "proxy-4616/proxy-service-bc8j8:portname1"
I1007 16:36:20.192424       1 service.go:446] Removing service port "proxy-4616/proxy-service-bc8j8:portname2"
I1007 16:36:20.192480       1 proxier.go:857] "Syncing iptables rules"
I1007 16:36:20.230438       1 proxier.go:824] "syncProxyRules complete" elapsed="38.096825ms"
I1007 16:36:26.523570       1 proxier.go:857] "Syncing iptables rules"
I1007 16:36:26.557019       1 proxier.go:824] "syncProxyRules complete" elapsed="33.500648ms"
I1007 16:36:28.876435       1 proxier.go:857] "Syncing iptables rules"
I1007 16:36:28.941060       1 proxier.go:824] "syncProxyRules complete" elapsed="64.646174ms"
I1007 16:36:33.349907       1 proxier.go:857] "Syncing iptables rules"
I1007 16:36:33.401057       1 proxier.go:824] "syncProxyRules complete" elapsed="51.197074ms"
I1007 16:36:34.329795       1 proxier.go:857] "Syncing iptables rules"
I1007 16:36:34.369667       1 proxier.go:824] "syncProxyRules complete" elapsed="39.91646ms"
I1007 16:36:35.343752       1 proxier.go:857] "Syncing iptables rules"
I1007 16:36:35.415847       1 service.go:306] Service volume-expand-1606-6948/csi-hostpathplugin updated: 1 ports
I1007 16:36:35.420245       1 proxier.go:824] "syncProxyRules complete" elapsed="76.544393ms"
I1007 16:36:35.420284       1 service.go:421] Adding new service port "volume-expand-1606-6948/csi-hostpathplugin:dummy" at 100.71.176.33:12345/TCP
I1007 16:36:35.420490       1 proxier.go:857] "Syncing iptables rules"
I1007 16:36:35.508365       1 proxier.go:824] "syncProxyRules complete" elapsed="88.079484ms"
I1007 16:36:36.508842       1 proxier.go:857] "Syncing iptables rules"
I1007 16:36:36.565778       1 proxier.go:824] "syncProxyRules complete" elapsed="57.013933ms"
I1007 16:36:40.903346       1 proxier.go:857] "Syncing iptables rules"
I1007 16:36:40.958946       1 proxier.go:824] "syncProxyRules complete" elapsed="55.648705ms"
I1007 16:36:45.850520       1 proxier.go:857] "Syncing iptables rules"
I1007 16:36:45.889510       1 proxier.go:824] "syncProxyRules complete" elapsed="39.045325ms"
I1007 16:36:46.636646       1 proxier.go:857] "Syncing iptables rules"
I1007 16:36:46.736010       1 proxier.go:824] "syncProxyRules complete" elapsed="99.402251ms"
I1007 16:36:54.446116       1 service.go:306] Service provisioning-7539-5141/csi-hostpathplugin updated: 0 ports
I1007 16:36:54.446154       1 service.go:446] Removing service port "provisioning-7539-5141/csi-hostpathplugin:dummy"
I1007 16:36:54.446207       1 proxier.go:857] "Syncing iptables rules"
I1007 16:36:54.481314       1 proxier.go:824] "syncProxyRules complete" elapsed="35.149975ms"
I1007 16:36:54.485750       1 proxier.go:857] "Syncing iptables rules"
I1007 16:36:54.536025       1 proxier.go:824] "syncProxyRules complete" elapsed="50.30435ms"
I1007 16:36:55.143600       1 service.go:306] Service svc-latency-3095/latency-svc-s6cc6 updated: 1 ports
I1007 16:36:55.317772       1 service.go:306] Service svc-latency-3095/latency-svc-9kfjx updated: 1 ports
I1007 16:36:55.328732       1 service.go:306] Service svc-latency-3095/latency-svc-jx2gg updated: 1 ports
I1007 16:36:55.341161       1 service.go:306] Service svc-latency-3095/latency-svc-l6t5w updated: 1 ports
I1007 16:36:55.346494       1 service.go:306] Service svc-latency-3095/latency-svc-t2lf5 updated: 1 ports
I1007 16:36:55.353532       1 service.go:306] Service svc-latency-3095/latency-svc-bq96v updated: 1 ports
I1007 16:36:55.446122       1 service.go:306] Service svc-latency-3095/latency-svc-ff565 updated: 1 ports
I1007 16:36:55.446300       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-ff565" at 100.66.213.245:80/TCP
I1007 16:36:55.446326       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-s6cc6" at 100.65.187.89:80/TCP
I1007 16:36:55.446339       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-9kfjx" at 100.65.151.141:80/TCP
I1007 16:36:55.446386       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-jx2gg" at 100.69.102.108:80/TCP
I1007 16:36:55.446417       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-l6t5w" at 100.69.252.65:80/TCP
I1007 16:36:55.446470       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-t2lf5" at 100.64.94.141:80/TCP
I1007 16:36:55.446506       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-bq96v" at 100.65.93.66:80/TCP
I1007 16:36:55.446664       1 proxier.go:857] "Syncing iptables rules"
I1007 16:36:55.456942       1 service.go:306] Service svc-latency-3095/latency-svc-nnsdl updated: 1 ports
I1007 16:36:55.459558       1 service.go:306] Service svc-latency-3095/latency-svc-gbf5h updated: 1 ports
I1007 16:36:55.482138       1 service.go:306] Service svc-latency-3095/latency-svc-z2zbj updated: 1 ports
I1007 16:36:55.487904       1 service.go:306] Service svc-latency-3095/latency-svc-dsjks updated: 1 ports
I1007 16:36:55.497097       1 service.go:306] Service svc-latency-3095/latency-svc-5trpp updated: 1 ports
I1007 16:36:55.514676       1 service.go:306] Service svc-latency-3095/latency-svc-g5qn4 updated: 1 ports
I1007 16:36:55.532382       1 proxier.go:824] "syncProxyRules complete" elapsed="86.205706ms"
I1007 16:36:55.538627       1 service.go:306] Service svc-latency-3095/latency-svc-qcstz updated: 1 ports
I1007 16:36:55.544773       1 service.go:306] Service svc-latency-3095/latency-svc-4rpbj updated: 1 ports
I1007 16:36:55.555620       1 service.go:306] Service svc-latency-3095/latency-svc-b87kg updated: 1 ports
I1007 16:36:55.567240       1 service.go:306] Service svc-latency-3095/latency-svc-xkxhn updated: 1 ports
I1007 16:36:55.575517       1 service.go:306] Service svc-latency-3095/latency-svc-t8hsl updated: 1 ports
I1007 16:36:55.581711       1 service.go:306] Service svc-latency-3095/latency-svc-bx4kx updated: 1 ports
I1007 16:36:55.594135       1 service.go:306] Service svc-latency-3095/latency-svc-4cv7j updated: 1 ports
I1007 16:36:55.598016       1 service.go:306] Service svc-latency-3095/latency-svc-pnlg5 updated: 1 ports
I1007 16:36:55.607523       1 service.go:306] Service svc-latency-3095/latency-svc-qt6bg updated: 1 ports
I1007 16:36:55.621103       1 service.go:306] Service svc-latency-3095/latency-svc-98tvv updated: 1 ports
I1007 16:36:55.627153       1 service.go:306] Service svc-latency-3095/latency-svc-gwngf updated: 1 ports
I1007 16:36:55.641950       1 service.go:306] Service svc-latency-3095/latency-svc-zvn2p updated: 1 ports
I1007 16:36:55.652014       1 service.go:306] Service svc-latency-3095/latency-svc-hm4s8 updated: 1 ports
I1007 16:36:55.675711       1 service.go:306] Service svc-latency-3095/latency-svc-6ldx5 updated: 1 ports
I1007 16:36:55.683673       1 service.go:306] Service svc-latency-3095/latency-svc-q68gs updated: 1 ports
I1007 16:36:55.697262       1 service.go:306] Service svc-latency-3095/latency-svc-w6pjg updated: 1 ports
I1007 16:36:55.706985       1 service.go:306] Service svc-latency-3095/latency-svc-rddgn updated: 1 ports
I1007 16:36:55.748434       1 service.go:306] Service svc-latency-3095/latency-svc-ctvfm updated: 1 ports
I1007 16:36:55.784760       1 service.go:306] Service svc-latency-3095/latency-svc-c44lg updated: 1 ports
I1007 16:36:55.817997       1 service.go:306] Service svc-latency-3095/latency-svc-qpktt updated: 1 ports
I1007 16:36:55.831793       1 service.go:306] Service svc-latency-3095/latency-svc-fljkf updated: 1 ports
I1007 16:36:55.862555       1 service.go:306] Service svc-latency-3095/latency-svc-mdv9v updated: 1 ports
I1007 16:36:55.882771       1 service.go:306] Service svc-latency-3095/latency-svc-8kddc updated: 1 ports
I1007 16:36:55.907127       1 service.go:306] Service svc-latency-3095/latency-svc-d5zn8 updated: 1 ports
I1007 16:36:55.914392       1 service.go:306] Service svc-latency-3095/latency-svc-9kf42 updated: 1 ports
I1007 16:36:55.919990       1 service.go:306] Service svc-latency-3095/latency-svc-rmbrm updated: 1 ports
I1007 16:36:55.932228       1 service.go:306] Service svc-latency-3095/latency-svc-87x29 updated: 1 ports
I1007 16:36:55.953558       1 service.go:306] Service svc-latency-3095/latency-svc-kf28m updated: 1 ports
I1007 16:36:55.965054       1 service.go:306] Service svc-latency-3095/latency-svc-4fhml updated: 1 ports
I1007 16:36:55.971387       1 service.go:306] Service svc-latency-3095/latency-svc-jwjg7 updated: 1 ports
I1007 16:36:55.984150       1 service.go:306] Service svc-latency-3095/latency-svc-r44kh updated: 1 ports
I1007 16:36:55.994162       1 service.go:306] Service svc-latency-3095/latency-svc-8vhnw updated: 1 ports
I1007 16:36:56.005007       1 service.go:306] Service svc-latency-3095/latency-svc-bbvgj updated: 1 ports
I1007 16:36:56.009800       1 service.go:306] Service svc-latency-3095/latency-svc-pgdl8 updated: 1 ports
I1007 16:36:56.011126       1 service.go:306] Service svc-latency-3095/latency-svc-2hlbc updated: 1 ports
I1007 16:36:56.026619       1 service.go:306] Service svc-latency-3095/latency-svc-mlr6d updated: 1 ports
I1007 16:36:56.041819       1 service.go:306] Service svc-latency-3095/latency-svc-mxnnz updated: 1 ports
I1007 16:36:56.051464       1 service.go:306] Service svc-latency-3095/latency-svc-4t54c updated: 1 ports
I1007 16:36:56.069125       1 service.go:306] Service svc-latency-3095/latency-svc-7bkhm updated: 1 ports
I1007 16:36:56.071022       1 service.go:306] Service svc-latency-3095/latency-svc-kkldd updated: 1 ports
I1007 16:36:56.082502       1 service.go:306] Service svc-latency-3095/latency-svc-9442f updated: 1 ports
I1007 16:36:56.094314       1 service.go:306] Service svc-latency-3095/latency-svc-fzzx6 updated: 1 ports
I1007 16:36:56.104442       1 service.go:306] Service svc-latency-3095/latency-svc-fl9bx updated: 1 ports
I1007 16:36:56.143578       1 service.go:306] Service svc-latency-3095/latency-svc-fdk5t updated: 1 ports
I1007 16:36:56.148760       1 service.go:306] Service svc-latency-3095/latency-svc-gpfnm updated: 1 ports
I1007 16:36:56.151528       1 service.go:306] Service svc-latency-3095/latency-svc-lr28t updated: 1 ports
I1007 16:36:56.180596       1 service.go:306] Service svc-latency-3095/latency-svc-wn7m9 updated: 1 ports
I1007 16:36:56.232114       1 service.go:306] Service svc-latency-3095/latency-svc-kpg48 updated: 1 ports
I1007 16:36:56.289963       1 service.go:306] Service svc-latency-3095/latency-svc-b6dh9 updated: 1 ports
I1007 16:36:56.321967       1 service.go:306] Service svc-latency-3095/latency-svc-7qzx2 updated: 1 ports
I1007 16:36:56.381886       1 service.go:306] Service svc-latency-3095/latency-svc-4vvjq updated: 1 ports
I1007 16:36:56.425234       1 service.go:306] Service svc-latency-3095/latency-svc-7dbpk updated: 1 ports
I1007 16:36:56.477654       1 service.go:306] Service svc-latency-3095/latency-svc-lgr2h updated: 1 ports
I1007 16:36:56.477931       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-kpg48" at 100.69.47.63:80/TCP
I1007 16:36:56.478308       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-t8hsl" at 100.69.244.232:80/TCP
I1007 16:36:56.478404       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-bx4kx" at 100.67.65.65:80/TCP
I1007 16:36:56.478424       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-pnlg5" at 100.71.91.21:80/TCP
I1007 16:36:56.478435       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-qt6bg" at 100.68.78.57:80/TCP
I1007 16:36:56.478445       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-gwngf" at 100.71.153.250:80/TCP
I1007 16:36:56.478465       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-qpktt" at 100.64.3.107:80/TCP
I1007 16:36:56.478482       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-7bkhm" at 100.70.130.73:80/TCP
I1007 16:36:56.478506       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-4rpbj" at 100.69.217.56:80/TCP
I1007 16:36:56.478522       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-rddgn" at 100.67.77.164:80/TCP
I1007 16:36:56.478535       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-8kddc" at 100.69.153.63:80/TCP
I1007 16:36:56.478551       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-mxnnz" at 100.68.142.195:80/TCP
I1007 16:36:56.478567       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-2hlbc" at 100.68.248.196:80/TCP
I1007 16:36:56.478581       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-fdk5t" at 100.65.14.48:80/TCP
I1007 16:36:56.478592       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-7dbpk" at 100.67.29.123:80/TCP
I1007 16:36:56.478606       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-mdv9v" at 100.67.196.212:80/TCP
I1007 16:36:56.478622       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-4vvjq" at 100.64.244.48:80/TCP
I1007 16:36:56.478636       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-xkxhn" at 100.71.144.70:80/TCP
I1007 16:36:56.478651       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-4fhml" at 100.70.155.17:80/TCP
I1007 16:36:56.478665       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-kkldd" at 100.67.73.39:80/TCP
I1007 16:36:56.478675       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-gpfnm" at 100.70.17.215:80/TCP
I1007 16:36:56.478687       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-qcstz" at 100.66.76.0:80/TCP
I1007 16:36:56.478700       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-q68gs" at 100.71.139.124:80/TCP
I1007 16:36:56.478714       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-kf28m" at 100.70.154.217:80/TCP
I1007 16:36:56.478729       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-bbvgj" at 100.64.44.209:80/TCP
I1007 16:36:56.478742       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-pgdl8" at 100.68.48.65:80/TCP
I1007 16:36:56.478752       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-4t54c" at 100.66.238.170:80/TCP
I1007 16:36:56.478765       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-g5qn4" at 100.71.82.170:80/TCP
I1007 16:36:56.478781       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-jwjg7" at 100.64.161.15:80/TCP
I1007 16:36:56.478796       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-5trpp" at 100.68.122.239:80/TCP
I1007 16:36:56.478815       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-zvn2p" at 100.64.192.254:80/TCP
I1007 16:36:56.478830       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-9442f" at 100.68.12.230:80/TCP
I1007 16:36:56.478840       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-fzzx6" at 100.66.182.249:80/TCP
I1007 16:36:56.478856       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-fl9bx" at 100.71.192.209:80/TCP
I1007 16:36:56.478881       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-nnsdl" at 100.65.50.171:80/TCP
I1007 16:36:56.478896       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-87x29" at 100.69.127.228:80/TCP
I1007 16:36:56.478907       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-dsjks" at 100.67.233.63:80/TCP
I1007 16:36:56.478917       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-b87kg" at 100.68.28.8:80/TCP
I1007 16:36:56.478928       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-4cv7j" at 100.71.213.21:80/TCP
I1007 16:36:56.478939       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-w6pjg" at 100.67.106.72:80/TCP
I1007 16:36:56.478952       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-lr28t" at 100.68.61.81:80/TCP
I1007 16:36:56.478964       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-fljkf" at 100.68.153.187:80/TCP
I1007 16:36:56.478979       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-mlr6d" at 100.67.80.45:80/TCP
I1007 16:36:56.478993       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-7qzx2" at 100.70.249.235:80/TCP
I1007 16:36:56.479004       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-gbf5h" at 100.67.134.122:80/TCP
I1007 16:36:56.479015       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-6ldx5" at 100.68.50.164:80/TCP
I1007 16:36:56.479028       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-ctvfm" at 100.69.38.232:80/TCP
I1007 16:36:56.479043       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-d5zn8" at 100.65.216.8:80/TCP
I1007 16:36:56.479058       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-rmbrm" at 100.67.179.134:80/TCP
I1007 16:36:56.479072       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-r44kh" at 100.66.80.196:80/TCP
I1007 16:36:56.479082       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-8vhnw" at 100.64.67.228:80/TCP
I1007 16:36:56.479092       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-lgr2h" at 100.68.214.167:80/TCP
I1007 16:36:56.479105       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-z2zbj" at 100.65.72.93:80/TCP
I1007 16:36:56.479119       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-98tvv" at 100.69.111.126:80/TCP
I1007 16:36:56.479135       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-hm4s8" at 100.67.246.215:80/TCP
I1007 16:36:56.479150       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-c44lg" at 100.64.21.66:80/TCP
I1007 16:36:56.479160       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-9kf42" at 100.66.172.211:80/TCP
I1007 16:36:56.479170       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-wn7m9" at 100.64.21.191:80/TCP
I1007 16:36:56.479181       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-b6dh9" at 100.71.251.62:80/TCP
I1007 16:36:56.479696       1 proxier.go:857] "Syncing iptables rules"
I1007 16:36:56.526233       1 proxier.go:824] "syncProxyRules complete" elapsed="48.309849ms"
I1007 16:36:56.527264       1 service.go:306] Service svc-latency-3095/latency-svc-f6hft updated: 1 ports
I1007 16:36:56.580894       1 service.go:306] Service svc-latency-3095/latency-svc-bwf5g updated: 1 ports
I1007 16:36:56.625292       1 service.go:306] Service svc-latency-3095/latency-svc-z6sb8 updated: 1 ports
I1007 16:36:56.675884       1 service.go:306] Service svc-latency-3095/latency-svc-5cnrd updated: 1 ports
I1007 16:36:56.742644       1 service.go:306] Service svc-latency-3095/latency-svc-5fgxs updated: 1 ports
I1007 16:36:56.784905       1 service.go:306] Service svc-latency-3095/latency-svc-qbzj7 updated: 1 ports
I1007 16:36:56.828363       1 service.go:306] Service svc-latency-3095/latency-svc-6c4j6 updated: 1 ports
I1007 16:36:56.874583       1 service.go:306] Service svc-latency-3095/latency-svc-m8mpl updated: 1 ports
I1007 16:36:56.935509       1 service.go:306] Service svc-latency-3095/latency-svc-9f5wr updated: 1 ports
I1007 16:36:56.980645       1 service.go:306] Service svc-latency-3095/latency-svc-psrmb updated: 1 ports
I1007 16:36:57.024331       1 service.go:306] Service svc-latency-3095/latency-svc-cvhcr updated: 1 ports
I1007 16:36:57.086944       1 service.go:306] Service svc-latency-3095/latency-svc-7vbmf updated: 1 ports
I1007 16:36:57.132085       1 service.go:306] Service svc-latency-3095/latency-svc-w26cn updated: 1 ports
I1007 16:36:57.172630       1 service.go:306] Service svc-latency-3095/latency-svc-v929g updated: 1 ports
I1007 16:36:57.232961       1 service.go:306] Service svc-latency-3095/latency-svc-kxdkg updated: 1 ports
I1007 16:36:57.290463       1 service.go:306] Service svc-latency-3095/latency-svc-xhbbx updated: 1 ports
I1007 16:36:57.324552       1 service.go:306] Service svc-latency-3095/latency-svc-n2kdw updated: 1 ports
I1007 16:36:57.377609       1 service.go:306] Service svc-latency-3095/latency-svc-54bfc updated: 1 ports
I1007 16:36:57.436034       1 service.go:306] Service svc-latency-3095/latency-svc-7bxpc updated: 1 ports
I1007 16:36:57.474184       1 service.go:306] Service svc-latency-3095/latency-svc-frh6j updated: 1 ports
I1007 16:36:57.474238       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-5cnrd" at 100.65.246.96:80/TCP
I1007 16:36:57.474258       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-kxdkg" at 100.64.167.168:80/TCP
I1007 16:36:57.474272       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-7bxpc" at 100.69.96.199:80/TCP
I1007 16:36:57.474316       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-f6hft" at 100.68.227.10:80/TCP
I1007 16:36:57.474341       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-bwf5g" at 100.65.47.149:80/TCP
I1007 16:36:57.474362       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-7vbmf" at 100.65.148.176:80/TCP
I1007 16:36:57.474421       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-n2kdw" at 100.68.40.188:80/TCP
I1007 16:36:57.474472       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-qbzj7" at 100.69.219.179:80/TCP
I1007 16:36:57.474546       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-psrmb" at 100.69.178.253:80/TCP
I1007 16:36:57.474638       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-v929g" at 100.66.13.84:80/TCP
I1007 16:36:57.474725       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-xhbbx" at 100.67.108.26:80/TCP
I1007 16:36:57.474823       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-54bfc" at 100.71.185.166:80/TCP
I1007 16:36:57.474928       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-frh6j" at 100.67.80.37:80/TCP
I1007 16:36:57.474987       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-w26cn" at 100.65.182.28:80/TCP
I1007 16:36:57.475046       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-z6sb8" at 100.68.41.32:80/TCP
I1007 16:36:57.475076       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-5fgxs" at 100.69.163.166:80/TCP
I1007 16:36:57.475115       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-6c4j6" at 100.68.53.18:80/TCP
I1007 16:36:57.475135       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-m8mpl" at 100.68.171.105:80/TCP
I1007 16:36:57.475166       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-9f5wr" at 100.69.153.238:80/TCP
I1007 16:36:57.475196       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-cvhcr" at 100.71.202.228:80/TCP
I1007 16:36:57.475531       1 proxier.go:857] "Syncing iptables rules"
I1007 16:36:57.529966       1 proxier.go:824] "syncProxyRules complete" elapsed="55.724109ms"
I1007 16:36:57.533082       1 service.go:306] Service svc-latency-3095/latency-svc-nj8wz updated: 1 ports
I1007 16:36:57.573927       1 service.go:306] Service svc-latency-3095/latency-svc-ppwwt updated: 1 ports
I1007 16:36:57.622221       1 service.go:306] Service svc-latency-3095/latency-svc-hnpw7 updated: 1 ports
I1007 16:36:57.682186       1 service.go:306] Service svc-latency-3095/latency-svc-ztbx7 updated: 1 ports
I1007 16:36:57.729533       1 service.go:306] Service svc-latency-3095/latency-svc-bls6s updated: 1 ports
I1007 16:36:57.778221       1 service.go:306] Service svc-latency-3095/latency-svc-qtsrz updated: 1 ports
I1007 16:36:57.825651       1 service.go:306] Service svc-latency-3095/latency-svc-h4dtg updated: 1 ports
I1007 16:36:57.881692       1 service.go:306] Service svc-latency-3095/latency-svc-gwt6k updated: 1 ports
I1007 16:36:57.927720       1 service.go:306] Service svc-latency-3095/latency-svc-mg7xm updated: 1 ports
I1007 16:36:57.975818       1 service.go:306] Service svc-latency-3095/latency-svc-75nss updated: 1 ports
I1007 16:36:58.037537       1 service.go:306] Service svc-latency-3095/latency-svc-b6p8n updated: 1 ports
I1007 16:36:58.077188       1 service.go:306] Service svc-latency-3095/latency-svc-8msj6 updated: 1 ports
I1007 16:36:58.130243       1 service.go:306] Service svc-latency-3095/latency-svc-wqtgd updated: 1 ports
I1007 16:36:58.177198       1 service.go:306] Service svc-latency-3095/latency-svc-5ml24 updated: 1 ports
I1007 16:36:58.232600       1 service.go:306] Service svc-latency-3095/latency-svc-jl5rj updated: 1 ports
I1007 16:36:58.276360       1 service.go:306] Service svc-latency-3095/latency-svc-f4n9c updated: 1 ports
I1007 16:36:58.334975       1 service.go:306] Service svc-latency-3095/latency-svc-k6gtd updated: 1 ports
I1007 16:36:58.380763       1 service.go:306] Service svc-latency-3095/latency-svc-4vj22 updated: 1 ports
I1007 16:36:58.439559       1 service.go:306] Service svc-latency-3095/latency-svc-jc8j5 updated: 1 ports
I1007 16:36:58.476418       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-jc8j5" at 100.69.83.18:80/TCP
I1007 16:36:58.476550       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-mg7xm" at 100.65.185.24:80/TCP
I1007 16:36:58.476573       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-b6p8n" at 100.66.35.241:80/TCP
I1007 16:36:58.476585       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-4vj22" at 100.69.123.160:80/TCP
I1007 16:36:58.476601       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-hnpw7" at 100.68.2.24:80/TCP
I1007 16:36:58.476616       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-ztbx7" at 100.67.177.28:80/TCP
I1007 16:36:58.476632       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-qtsrz" at 100.67.83.212:80/TCP
I1007 16:36:58.476668       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-gwt6k" at 100.71.105.225:80/TCP
I1007 16:36:58.476687       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-jl5rj" at 100.67.179.50:80/TCP
I1007 16:36:58.476704       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-k6gtd" at 100.69.17.62:80/TCP
I1007 16:36:58.476722       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-h4dtg" at 100.69.22.130:80/TCP
I1007 16:36:58.476742       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-75nss" at 100.67.235.0:80/TCP
I1007 16:36:58.476758       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-wqtgd" at 100.70.168.71:80/TCP
I1007 16:36:58.476777       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-5ml24" at 100.67.187.150:80/TCP
I1007 16:36:58.476792       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-f4n9c" at 100.70.222.108:80/TCP
I1007 16:36:58.476804       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-nj8wz" at 100.64.114.16:80/TCP
I1007 16:36:58.476840       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-ppwwt" at 100.69.250.54:80/TCP
I1007 16:36:58.476859       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-bls6s" at 100.71.63.51:80/TCP
I1007 16:36:58.476873       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-8msj6" at 100.68.221.52:80/TCP
I1007 16:36:58.477509       1 proxier.go:857] "Syncing iptables rules"
I1007 16:36:58.489067       1 service.go:306] Service svc-latency-3095/latency-svc-xrq6s updated: 1 ports
I1007 16:36:58.547874       1 service.go:306] Service svc-latency-3095/latency-svc-h7d26 updated: 1 ports
I1007 16:36:58.564456       1 proxier.go:824] "syncProxyRules complete" elapsed="88.044099ms"
I1007 16:36:58.587209       1 service.go:306] Service svc-latency-3095/latency-svc-vbzgw updated: 1 ports
I1007 16:36:58.629255       1 service.go:306] Service svc-latency-3095/latency-svc-hmsbs updated: 1 ports
I1007 16:36:58.691836       1 service.go:306] Service svc-latency-3095/latency-svc-dm87n updated: 1 ports
I1007 16:36:58.739061       1 service.go:306] Service svc-latency-3095/latency-svc-bbzmr updated: 1 ports
I1007 16:36:58.779032       1 service.go:306] Service svc-latency-3095/latency-svc-29zjj updated: 1 ports
I1007 16:36:58.823184       1 service.go:306] Service svc-latency-3095/latency-svc-c7lkz updated: 1 ports
I1007 16:36:58.883107       1 service.go:306] Service svc-latency-3095/latency-svc-c5npb updated: 1 ports
I1007 16:36:58.950929       1 service.go:306] Service svc-latency-3095/latency-svc-9nct6 updated: 1 ports
I1007 16:36:58.984977       1 service.go:306] Service svc-latency-3095/latency-svc-xv69n updated: 1 ports
I1007 16:36:59.035575       1 service.go:306] Service svc-latency-3095/latency-svc-p68ff updated: 1 ports
I1007 16:36:59.086536       1 service.go:306] Service svc-latency-3095/latency-svc-lxrsz updated: 1 ports
I1007 16:36:59.138245       1 service.go:306] Service svc-latency-3095/latency-svc-kqxdp updated: 1 ports
I1007 16:36:59.179088       1 service.go:306] Service svc-latency-3095/latency-svc-cmd6h updated: 1 ports
I1007 16:36:59.236455       1 service.go:306] Service svc-latency-3095/latency-svc-gllh2 updated: 1 ports
I1007 16:36:59.282611       1 service.go:306] Service svc-latency-3095/latency-svc-z75hs updated: 1 ports
I1007 16:36:59.326234       1 service.go:306] Service svc-latency-3095/latency-svc-22x7h updated: 1 ports
I1007 16:36:59.380315       1 service.go:306] Service svc-latency-3095/latency-svc-b6w7q updated: 1 ports
I1007 16:36:59.451341       1 service.go:306] Service svc-latency-3095/latency-svc-8lr6f updated: 1 ports
I1007 16:36:59.451389       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-bbzmr" at 100.68.52.224:80/TCP
I1007 16:36:59.451406       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-29zjj" at 100.65.197.10:80/TCP
I1007 16:36:59.451415       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-9nct6" at 100.67.95.153:80/TCP
I1007 16:36:59.451424       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-p68ff" at 100.64.165.208:80/TCP
I1007 16:36:59.451435       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-kqxdp" at 100.65.12.86:80/TCP
I1007 16:36:59.451443       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-22x7h" at 100.64.180.92:80/TCP
I1007 16:36:59.451454       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-h7d26" at 100.66.39.198:80/TCP
I1007 16:36:59.451464       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-vbzgw" at 100.65.191.219:80/TCP
I1007 16:36:59.451473       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-b6w7q" at 100.71.241.79:80/TCP
I1007 16:36:59.451482       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-gllh2" at 100.70.111.177:80/TCP
I1007 16:36:59.451491       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-z75hs" at 100.70.58.196:80/TCP
I1007 16:36:59.451499       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-8lr6f" at 100.68.53.57:80/TCP
I1007 16:36:59.451508       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-c5npb" at 100.67.58.128:80/TCP
I1007 16:36:59.451517       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-lxrsz" at 100.71.39.215:80/TCP
I1007 16:36:59.451527       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-c7lkz" at 100.65.110.189:80/TCP
I1007 16:36:59.451537       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-xv69n" at 100.65.154.30:80/TCP
I1007 16:36:59.451546       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-cmd6h" at 100.68.248.168:80/TCP
I1007 16:36:59.451561       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-xrq6s" at 100.71.70.182:80/TCP
I1007 16:36:59.451570       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-hmsbs" at 100.69.213.140:80/TCP
I1007 16:36:59.451581       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-dm87n" at 100.67.8.93:80/TCP
I1007 16:36:59.452105       1 proxier.go:857] "Syncing iptables rules"
I1007 16:36:59.506938       1 service.go:306] Service svc-latency-3095/latency-svc-h6b2q updated: 1 ports
I1007 16:36:59.516081       1 proxier.go:824] "syncProxyRules complete" elapsed="64.688399ms"
I1007 16:36:59.529898       1 service.go:306] Service svc-latency-3095/latency-svc-wtczd updated: 1 ports
I1007 16:36:59.577490       1 service.go:306] Service svc-latency-3095/latency-svc-k5wlj updated: 1 ports
I1007 16:36:59.632107       1 service.go:306] Service svc-latency-3095/latency-svc-jlrcx updated: 1 ports
I1007 16:36:59.681619       1 service.go:306] Service svc-latency-3095/latency-svc-d2wzc updated: 1 ports
I1007 16:36:59.734030       1 service.go:306] Service svc-latency-3095/latency-svc-4fjzm updated: 1 ports
I1007 16:36:59.827654       1 service.go:306] Service svc-latency-3095/latency-svc-z6jb8 updated: 1 ports
I1007 16:36:59.904940       1 service.go:306] Service svc-latency-3095/latency-svc-ldxll updated: 1 ports
I1007 16:36:59.970155       1 service.go:306] Service svc-latency-3095/latency-svc-56lbz updated: 1 ports
I1007 16:37:00.021875       1 service.go:306] Service svc-latency-3095/latency-svc-9ltd2 updated: 1 ports
I1007 16:37:00.051269       1 service.go:306] Service svc-latency-3095/latency-svc-fccbn updated: 1 ports
I1007 16:37:00.140000       1 service.go:306] Service svc-latency-3095/latency-svc-5qcv8 updated: 1 ports
I1007 16:37:00.183759       1 service.go:306] Service svc-latency-3095/latency-svc-cv7zk updated: 1 ports
I1007 16:37:00.217838       1 service.go:306] Service svc-latency-3095/latency-svc-gccbm updated: 1 ports
I1007 16:37:00.288587       1 service.go:306] Service svc-latency-3095/latency-svc-wbqq9 updated: 1 ports
I1007 16:37:00.294324       1 service.go:306] Service svc-latency-3095/latency-svc-s22cz updated: 1 ports
I1007 16:37:00.340311       1 service.go:306] Service svc-latency-3095/latency-svc-w4bs6 updated: 1 ports
I1007 16:37:00.403548       1 service.go:306] Service svc-latency-3095/latency-svc-f5hjw updated: 1 ports
I1007 16:37:00.442197       1 service.go:306] Service svc-latency-3095/latency-svc-h6c5s updated: 1 ports
I1007 16:37:00.477140       1 service.go:306] Service svc-latency-3095/latency-svc-tcwds updated: 1 ports
I1007 16:37:00.477183       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-gccbm" at 100.70.2.190:80/TCP
I1007 16:37:00.477222       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-wbqq9" at 100.69.62.250:80/TCP
I1007 16:37:00.477240       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-w4bs6" at 100.70.38.139:80/TCP
I1007 16:37:00.477255       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-h6b2q" at 100.69.142.96:80/TCP
I1007 16:37:00.477269       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-k5wlj" at 100.65.232.181:80/TCP
I1007 16:37:00.477335       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-z6jb8" at 100.67.208.127:80/TCP
I1007 16:37:00.477344       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-ldxll" at 100.65.32.111:80/TCP
I1007 16:37:00.477354       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-fccbn" at 100.67.253.142:80/TCP
I1007 16:37:00.477366       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-f5hjw" at 100.67.97.27:80/TCP
I1007 16:37:00.477380       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-h6c5s" at 100.69.220.150:80/TCP
I1007 16:37:00.477394       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-tcwds" at 100.70.90.148:80/TCP
I1007 16:37:00.477408       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-wtczd" at 100.69.176.86:80/TCP
I1007 16:37:00.477419       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-d2wzc" at 100.65.171.156:80/TCP
I1007 16:37:00.477432       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-56lbz" at 100.70.113.188:80/TCP
I1007 16:37:00.477447       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-s22cz" at 100.64.168.202:80/TCP
I1007 16:37:00.477460       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-jlrcx" at 100.70.154.3:80/TCP
I1007 16:37:00.477479       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-4fjzm" at 100.68.159.188:80/TCP
I1007 16:37:00.477491       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-9ltd2" at 100.68.43.30:80/TCP
I1007 16:37:00.477511       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-5qcv8" at 100.69.64.168:80/TCP
I1007 16:37:00.477523       1 service.go:421] Adding new service port "svc-latency-3095/latency-svc-cv7zk" at 100.68.34.194:80/TCP
I1007 16:37:00.478200       1 proxier.go:857] "Syncing iptables rules"
I1007 16:37:00.537241       1 service.go:306] Service svc-latency-3095/latency-svc-tslc2 updated: 1 ports
I1007 16:37:00.561251       1 proxier.go:824] "syncProxyRules complete" elapsed="84.062693ms"
I1007 16:37:00.589115       1 service.go:306] Service svc-latency-3095/latency-svc-wtctt updated: 1 ports
I1007 16:37:00.633127       1 service.go:306] Service svc-latency-3095/latency-svc-66lt9 updated: 1 ports
I1007 16:37:00.685395       1 service.go:306] Service svc-latency-3095/latency-svc-vv22v updated: 1 ports
I1007 16:37:00.737798       1 service.go:306] Service svc-latency-3095/latency-svc-mp5dk updated: 1 ports
I1007 16:37:00.781060       1 service.go:306] Service 
svc-latency-3095/latency-svc-psg5k updated: 1 ports\nI1007 16:37:00.834105       1 service.go:306] Service svc-latency-3095/latency-svc-4hlfk updated: 1 ports\nI1007 16:37:00.889710       1 service.go:306] Service svc-latency-3095/latency-svc-k8g55 updated: 1 ports\nI1007 16:37:00.931619       1 service.go:306] Service svc-latency-3095/latency-svc-9xp2v updated: 1 ports\nI1007 16:37:00.982034       1 service.go:306] Service svc-latency-3095/latency-svc-7g7vf updated: 1 ports\nI1007 16:37:01.027017       1 service.go:306] Service svc-latency-3095/latency-svc-gxfhn updated: 1 ports\nI1007 16:37:01.087655       1 service.go:306] Service svc-latency-3095/latency-svc-59bkz updated: 1 ports\nI1007 16:37:01.125781       1 service.go:306] Service svc-latency-3095/latency-svc-kn9r7 updated: 1 ports\nI1007 16:37:01.185832       1 service.go:306] Service svc-latency-3095/latency-svc-rhv4z updated: 1 ports\nI1007 16:37:01.233713       1 service.go:306] Service svc-latency-3095/latency-svc-m6w6x updated: 1 ports\nI1007 16:37:01.285051       1 service.go:306] Service svc-latency-3095/latency-svc-s4dzf updated: 1 ports\nI1007 16:37:01.335346       1 service.go:306] Service svc-latency-3095/latency-svc-7qxlz updated: 1 ports\nI1007 16:37:01.388785       1 service.go:306] Service svc-latency-3095/latency-svc-zpzks updated: 1 ports\nI1007 16:37:01.431498       1 service.go:306] Service svc-latency-3095/latency-svc-s699j updated: 1 ports\nI1007 16:37:01.478038       1 service.go:306] Service svc-latency-3095/latency-svc-fkcz9 updated: 1 ports\nI1007 16:37:01.478096       1 service.go:421] Adding new service port \"svc-latency-3095/latency-svc-psg5k\" at 100.65.218.50:80/TCP\nI1007 16:37:01.478114       1 service.go:421] Adding new service port \"svc-latency-3095/latency-svc-4hlfk\" at 100.64.24.118:80/TCP\nI1007 16:37:01.478126       1 service.go:421] Adding new service port \"svc-latency-3095/latency-svc-k8g55\" at 100.68.159.59:80/TCP\nI1007 16:37:01.478138       1 service.go:421] 
Adding new service port \"svc-latency-3095/latency-svc-9xp2v\" at 100.69.202.137:80/TCP\nI1007 16:37:01.478151       1 service.go:421] Adding new service port \"svc-latency-3095/latency-svc-rhv4z\" at 100.68.134.216:80/TCP\nI1007 16:37:01.478164       1 service.go:421] Adding new service port \"svc-latency-3095/latency-svc-s4dzf\" at 100.68.166.235:80/TCP\nI1007 16:37:01.478176       1 service.go:421] Adding new service port \"svc-latency-3095/latency-svc-7g7vf\" at 100.68.107.102:80/TCP\nI1007 16:37:01.478189       1 service.go:421] Adding new service port \"svc-latency-3095/latency-svc-59bkz\" at 100.70.62.55:80/TCP\nI1007 16:37:01.478200       1 service.go:421] Adding new service port \"svc-latency-3095/latency-svc-kn9r7\" at 100.70.22.185:80/TCP\nI1007 16:37:01.478211       1 service.go:421] Adding new service port \"svc-latency-3095/latency-svc-m6w6x\" at 100.65.150.247:80/TCP\nI1007 16:37:01.478222       1 service.go:421] Adding new service port \"svc-latency-3095/latency-svc-tslc2\" at 100.67.208.182:80/TCP\nI1007 16:37:01.478237       1 service.go:421] Adding new service port \"svc-latency-3095/latency-svc-wtctt\" at 100.65.29.170:80/TCP\nI1007 16:37:01.478251       1 service.go:421] Adding new service port \"svc-latency-3095/latency-svc-vv22v\" at 100.66.228.42:80/TCP\nI1007 16:37:01.478265       1 service.go:421] Adding new service port \"svc-latency-3095/latency-svc-gxfhn\" at 100.71.46.105:80/TCP\nI1007 16:37:01.478275       1 service.go:421] Adding new service port \"svc-latency-3095/latency-svc-zpzks\" at 100.70.212.70:80/TCP\nI1007 16:37:01.478284       1 service.go:421] Adding new service port \"svc-latency-3095/latency-svc-s699j\" at 100.66.166.91:80/TCP\nI1007 16:37:01.478295       1 service.go:421] Adding new service port \"svc-latency-3095/latency-svc-66lt9\" at 100.67.8.197:80/TCP\nI1007 16:37:01.478309       1 service.go:421] Adding new service port \"svc-latency-3095/latency-svc-mp5dk\" at 100.66.153.98:80/TCP\nI1007 16:37:01.478323       1 
service.go:421] Adding new service port \"svc-latency-3095/latency-svc-7qxlz\" at 100.68.221.90:80/TCP\nI1007 16:37:01.478337       1 service.go:421] Adding new service port \"svc-latency-3095/latency-svc-fkcz9\" at 100.69.88.218:80/TCP\nI1007 16:37:01.478899       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:37:01.528643       1 service.go:306] Service svc-latency-3095/latency-svc-wmvws updated: 1 ports\nI1007 16:37:01.542268       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"64.170832ms\"\nI1007 16:37:01.586063       1 service.go:306] Service svc-latency-3095/latency-svc-mf8w8 updated: 1 ports\nI1007 16:37:01.634311       1 service.go:306] Service svc-latency-3095/latency-svc-mvf29 updated: 1 ports\nI1007 16:37:01.684951       1 service.go:306] Service svc-latency-3095/latency-svc-hbpf7 updated: 1 ports\nI1007 16:37:01.728685       1 service.go:306] Service svc-latency-3095/latency-svc-5z94d updated: 1 ports\nI1007 16:37:01.785430       1 service.go:306] Service svc-latency-3095/latency-svc-47png updated: 1 ports\nI1007 16:37:01.859031       1 service.go:306] Service svc-latency-3095/latency-svc-gvr8j updated: 1 ports\nI1007 16:37:01.882563       1 service.go:306] Service svc-latency-3095/latency-svc-gsmwm updated: 1 ports\nI1007 16:37:01.929645       1 service.go:306] Service svc-latency-3095/latency-svc-xznls updated: 1 ports\nI1007 16:37:01.980877       1 service.go:306] Service svc-latency-3095/latency-svc-ktqvq updated: 1 ports\nI1007 16:37:02.035357       1 service.go:306] Service svc-latency-3095/latency-svc-f985x updated: 1 ports\nI1007 16:37:02.084045       1 service.go:306] Service svc-latency-3095/latency-svc-rlsbv updated: 1 ports\nI1007 16:37:02.128634       1 service.go:306] Service svc-latency-3095/latency-svc-8lhmv updated: 1 ports\nI1007 16:37:02.185914       1 service.go:306] Service svc-latency-3095/latency-svc-2hs7k updated: 1 ports\nI1007 16:37:02.227372       1 service.go:306] Service svc-latency-3095/latency-svc-2rfs8 
updated: 1 ports\nI1007 16:37:02.279758       1 service.go:306] Service svc-latency-3095/latency-svc-mmps4 updated: 1 ports\nI1007 16:37:02.331317       1 service.go:306] Service svc-latency-3095/latency-svc-l9k59 updated: 1 ports\nI1007 16:37:02.434431       1 service.go:306] Service svc-latency-3095/latency-svc-tdxmn updated: 1 ports\nI1007 16:37:02.481557       1 service.go:421] Adding new service port \"svc-latency-3095/latency-svc-47png\" at 100.66.72.167:80/TCP\nI1007 16:37:02.481586       1 service.go:421] Adding new service port \"svc-latency-3095/latency-svc-xznls\" at 100.69.218.249:80/TCP\nI1007 16:37:02.481599       1 service.go:421] Adding new service port \"svc-latency-3095/latency-svc-f985x\" at 100.71.214.10:80/TCP\nI1007 16:37:02.481608       1 service.go:421] Adding new service port \"svc-latency-3095/latency-svc-5z94d\" at 100.71.47.53:80/TCP\nI1007 16:37:02.481621       1 service.go:421] Adding new service port \"svc-latency-3095/latency-svc-hbpf7\" at 100.68.142.20:80/TCP\nI1007 16:37:02.481631       1 service.go:421] Adding new service port \"svc-latency-3095/latency-svc-2rfs8\" at 100.69.4.202:80/TCP\nI1007 16:37:02.481645       1 service.go:421] Adding new service port \"svc-latency-3095/latency-svc-wmvws\" at 100.69.197.47:80/TCP\nI1007 16:37:02.481658       1 service.go:421] Adding new service port \"svc-latency-3095/latency-svc-gsmwm\" at 100.67.76.31:80/TCP\nI1007 16:37:02.481669       1 service.go:421] Adding new service port \"svc-latency-3095/latency-svc-2hs7k\" at 100.70.116.137:80/TCP\nI1007 16:37:02.481679       1 service.go:421] Adding new service port \"svc-latency-3095/latency-svc-tdxmn\" at 100.70.224.8:80/TCP\nI1007 16:37:02.481689       1 service.go:421] Adding new service port \"svc-latency-3095/latency-svc-mvf29\" at 100.71.100.204:80/TCP\nI1007 16:37:02.481698       1 service.go:421] Adding new service port \"svc-latency-3095/latency-svc-gvr8j\" at 100.66.12.102:80/TCP\nI1007 16:37:02.481708       1 service.go:421] Adding 
new service port \"svc-latency-3095/latency-svc-ktqvq\" at 100.70.203.165:80/TCP\nI1007 16:37:02.481716       1 service.go:421] Adding new service port \"svc-latency-3095/latency-svc-rlsbv\" at 100.69.137.145:80/TCP\nI1007 16:37:02.481726       1 service.go:421] Adding new service port \"svc-latency-3095/latency-svc-8lhmv\" at 100.65.140.140:80/TCP\nI1007 16:37:02.481736       1 service.go:421] Adding new service port \"svc-latency-3095/latency-svc-mmps4\" at 100.71.149.209:80/TCP\nI1007 16:37:02.481744       1 service.go:421] Adding new service port \"svc-latency-3095/latency-svc-l9k59\" at 100.64.152.21:80/TCP\nI1007 16:37:02.481755       1 service.go:421] Adding new service port \"svc-latency-3095/latency-svc-mf8w8\" at 100.65.132.70:80/TCP\nI1007 16:37:02.482257       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:37:02.492799       1 service.go:306] Service svc-latency-3095/latency-svc-pzmln updated: 1 ports\nI1007 16:37:02.538277       1 service.go:306] Service svc-latency-3095/latency-svc-bwmg6 updated: 1 ports\nI1007 16:37:02.588927       1 service.go:306] Service svc-latency-3095/latency-svc-c7wm9 updated: 1 ports\nI1007 16:37:02.621338       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"139.782599ms\"\nI1007 16:37:02.647521       1 service.go:306] Service svc-latency-3095/latency-svc-2cdqs updated: 1 ports\nI1007 16:37:02.682562       1 service.go:306] Service svc-latency-3095/latency-svc-kkd2v updated: 1 ports\nI1007 16:37:02.736623       1 service.go:306] Service svc-latency-3095/latency-svc-6ckxb updated: 1 ports\nI1007 16:37:02.782374       1 service.go:306] Service svc-latency-3095/latency-svc-szh7f updated: 1 ports\nI1007 16:37:02.836971       1 service.go:306] Service svc-latency-3095/latency-svc-kxrks updated: 1 ports\nI1007 16:37:02.882821       1 service.go:306] Service svc-latency-3095/latency-svc-fll55 updated: 1 ports\nI1007 16:37:03.124236       1 service.go:306] Service svc-latency-3095/latency-svc-zdgxl updated: 1 
ports\nI1007 16:37:03.159941       1 service.go:306] Service svc-latency-3095/latency-svc-87xbm updated: 1 ports\nI1007 16:37:03.185928       1 service.go:306] Service svc-latency-3095/latency-svc-zzzr9 updated: 1 ports\nI1007 16:37:03.230762       1 service.go:306] Service svc-latency-3095/latency-svc-bwj8l updated: 1 ports\nI1007 16:37:03.312010       1 service.go:306] Service svc-latency-3095/latency-svc-4hd6t updated: 1 ports\nI1007 16:37:03.376078       1 service.go:306] Service svc-latency-3095/latency-svc-t6qzt updated: 1 ports\nI1007 16:37:03.404496       1 service.go:306] Service svc-latency-3095/latency-svc-5j87t updated: 1 ports\nI1007 16:37:03.423268       1 service.go:306] Service svc-latency-3095/latency-svc-99d25 updated: 1 ports\nI1007 16:37:03.446433       1 service.go:306] Service svc-latency-3095/latency-svc-jnzmw updated: 1 ports\nI1007 16:37:03.446531       1 service.go:421] Adding new service port \"svc-latency-3095/latency-svc-c7wm9\" at 100.66.184.137:80/TCP\nI1007 16:37:03.446561       1 service.go:421] Adding new service port \"svc-latency-3095/latency-svc-szh7f\" at 100.68.121.17:80/TCP\nI1007 16:37:03.446585       1 service.go:421] Adding new service port \"svc-latency-3095/latency-svc-zdgxl\" at 100.69.181.139:80/TCP\nI1007 16:37:03.446609       1 service.go:421] Adding new service port \"svc-latency-3095/latency-svc-87xbm\" at 100.67.155.133:80/TCP\nI1007 16:37:03.446657       1 service.go:421] Adding new service port \"svc-latency-3095/latency-svc-bwmg6\" at 100.67.214.80:80/TCP\nI1007 16:37:03.446702       1 service.go:421] Adding new service port \"svc-latency-3095/latency-svc-5j87t\" at 100.70.44.232:80/TCP\nI1007 16:37:03.446757       1 service.go:421] Adding new service port \"svc-latency-3095/latency-svc-jnzmw\" at 100.70.58.215:80/TCP\nI1007 16:37:03.446779       1 service.go:421] Adding new service port \"svc-latency-3095/latency-svc-t6qzt\" at 100.69.23.48:80/TCP\nI1007 16:37:03.446790       1 service.go:421] Adding new 
service port \"svc-latency-3095/latency-svc-zzzr9\" at 100.64.104.235:80/TCP\nI1007 16:37:03.446801       1 service.go:421] Adding new service port \"svc-latency-3095/latency-svc-bwj8l\" at 100.64.209.179:80/TCP\nI1007 16:37:03.446831       1 service.go:421] Adding new service port \"svc-latency-3095/latency-svc-4hd6t\" at 100.67.33.194:80/TCP\nI1007 16:37:03.446856       1 service.go:421] Adding new service port \"svc-latency-3095/latency-svc-6ckxb\" at 100.66.29.195:80/TCP\nI1007 16:37:03.446879       1 service.go:421] Adding new service port \"svc-latency-3095/latency-svc-2cdqs\" at 100.70.174.76:80/TCP\nI1007 16:37:03.446902       1 service.go:421] Adding new service port \"svc-latency-3095/latency-svc-kkd2v\" at 100.71.57.8:80/TCP\nI1007 16:37:03.446954       1 service.go:421] Adding new service port \"svc-latency-3095/latency-svc-kxrks\" at 100.64.178.231:80/TCP\nI1007 16:37:03.446973       1 service.go:421] Adding new service port \"svc-latency-3095/latency-svc-fll55\" at 100.66.38.5:80/TCP\nI1007 16:37:03.446986       1 service.go:421] Adding new service port \"svc-latency-3095/latency-svc-99d25\" at 100.68.211.209:80/TCP\nI1007 16:37:03.447001       1 service.go:421] Adding new service port \"svc-latency-3095/latency-svc-pzmln\" at 100.70.196.81:80/TCP\nI1007 16:37:03.447625       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:37:03.525810       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"79.269644ms\"\nI1007 16:37:04.526938       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:37:04.613355       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"87.16802ms\"\nI1007 16:37:09.463239       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:37:09.611528       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"149.081317ms\"\nI1007 16:37:09.612452       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:37:09.789577       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"178.001982ms\"\nI1007 16:37:10.491644   
    1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:37:10.546122       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"55.631885ms\"\nI1007 16:37:10.646926       1 service.go:306] Service services-4131/nodeport-test updated: 1 ports\nI1007 16:37:10.930245       1 service.go:306] Service svc-latency-3095/latency-svc-22x7h updated: 0 ports\nI1007 16:37:10.968256       1 service.go:306] Service svc-latency-3095/latency-svc-29zjj updated: 0 ports\nI1007 16:37:11.054629       1 service.go:306] Service svc-latency-3095/latency-svc-2cdqs updated: 0 ports\nI1007 16:37:11.157008       1 service.go:306] Service svc-latency-3095/latency-svc-2hlbc updated: 0 ports\nI1007 16:37:11.229983       1 service.go:306] Service svc-latency-3095/latency-svc-2hs7k updated: 0 ports\nI1007 16:37:11.299318       1 service.go:306] Service svc-latency-3095/latency-svc-2rfs8 updated: 0 ports\nI1007 16:37:11.357460       1 service.go:306] Service svc-latency-3095/latency-svc-47png updated: 0 ports\nI1007 16:37:11.383564       1 service.go:306] Service svc-latency-3095/latency-svc-4cv7j updated: 0 ports\nI1007 16:37:11.427462       1 service.go:306] Service svc-latency-3095/latency-svc-4fhml updated: 0 ports\nI1007 16:37:11.441120       1 service.go:306] Service svc-latency-3095/latency-svc-4fjzm updated: 0 ports\nI1007 16:37:11.452188       1 service.go:306] Service svc-latency-3095/latency-svc-4hd6t updated: 0 ports\nI1007 16:37:11.460208       1 service.go:306] Service svc-latency-3095/latency-svc-4hlfk updated: 0 ports\nI1007 16:37:11.471787       1 service.go:306] Service svc-latency-3095/latency-svc-4rpbj updated: 0 ports\nI1007 16:37:11.471846       1 service.go:446] Removing service port \"svc-latency-3095/latency-svc-29zjj\"\nI1007 16:37:11.472039       1 service.go:446] Removing service port \"svc-latency-3095/latency-svc-2rfs8\"\nI1007 16:37:11.472062       1 service.go:446] Removing service port \"svc-latency-3095/latency-svc-4hd6t\"\nI1007 16:37:11.472085       1 
service.go:446] Removing service port \"svc-latency-3095/latency-svc-4rpbj\"\nI1007 16:37:11.472098       1 service.go:446] Removing service port \"svc-latency-3095/latency-svc-22x7h\"\nI1007 16:37:11.472110       1 service.go:446] Removing service port \"svc-latency-3095/latency-svc-2hs7k\"\nI1007 16:37:11.472120       1 service.go:446] Removing service port \"svc-latency-3095/latency-svc-4fhml\"\nI1007 16:37:11.472134       1 service.go:446] Removing service port \"svc-latency-3095/latency-svc-4hlfk\"\nI1007 16:37:11.472160       1 service.go:421] Adding new service port \"services-4131/nodeport-test:http\" at 100.64.106.95:80/TCP\nI1007 16:37:11.472178       1 service.go:446] Removing service port \"svc-latency-3095/latency-svc-2cdqs\"\nI1007 16:37:11.472190       1 service.go:446] Removing service port \"svc-latency-3095/latency-svc-47png\"\nI1007 16:37:11.472202       1 service.go:446] Removing service port \"svc-latency-3095/latency-svc-2hlbc\"\nI1007 16:37:11.472214       1 service.go:446] Removing service port \"svc-latency-3095/latency-svc-4cv7j\"\nI1007 16:37:11.472225       1 service.go:446] Removing service port \"svc-latency-3095/latency-svc-4fjzm\"\nI1007 16:37:11.472434       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:37:11.483269       1 service.go:306] Service svc-latency-3095/latency-svc-4t54c updated: 0 ports\nI1007 16:37:11.499542       1 service.go:306] Service svc-latency-3095/latency-svc-4vj22 updated: 0 ports\nI1007 16:37:11.510450       1 proxier.go:1292] \"Opened local port\" port=\"\\\"nodePort for services-4131/nodeport-test:http\\\" (:31230/tcp4)\"\nI1007 16:37:11.515625       1 service.go:306] Service svc-latency-3095/latency-svc-4vvjq updated: 0 ports\nI1007 16:37:11.519010       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"47.142805ms\"\nI1007 16:37:11.535768       1 service.go:306] Service svc-latency-3095/latency-svc-54bfc updated: 0 ports\nI1007 16:37:11.553389       1 service.go:306] Service 
svc-latency-3095/latency-svc-56lbz updated: 0 ports\nI1007 16:37:11.577287       1 service.go:306] Service svc-latency-3095/latency-svc-59bkz updated: 0 ports\nI1007 16:37:11.577324       1 service.go:306] Service svc-latency-3095/latency-svc-5cnrd updated: 0 ports\nI1007 16:37:11.587461       1 service.go:306] Service svc-latency-3095/latency-svc-5fgxs updated: 0 ports\nI1007 16:37:11.597295       1 service.go:306] Service svc-latency-3095/latency-svc-5j87t updated: 0 ports\nI1007 16:37:11.617403       1 service.go:306] Service svc-latency-3095/latency-svc-5ml24 updated: 0 ports\nI1007 16:37:11.626584       1 service.go:306] Service svc-latency-3095/latency-svc-5qcv8 updated: 0 ports\nI1007 16:37:11.640301       1 service.go:306] Service svc-latency-3095/latency-svc-5trpp updated: 0 ports\nI1007 16:37:11.649134       1 service.go:306] Service svc-latency-3095/latency-svc-5z94d updated: 0 ports\nI1007 16:37:11.666467       1 service.go:306] Service svc-latency-3095/latency-svc-66lt9 updated: 0 ports\nI1007 16:37:11.674884       1 service.go:306] Service svc-latency-3095/latency-svc-6c4j6 updated: 0 ports\nI1007 16:37:11.698176       1 service.go:306] Service svc-latency-3095/latency-svc-6ckxb updated: 0 ports\nI1007 16:37:11.705241       1 service.go:306] Service svc-latency-3095/latency-svc-6ldx5 updated: 0 ports\nI1007 16:37:11.713430       1 service.go:306] Service svc-latency-3095/latency-svc-75nss updated: 0 ports\nI1007 16:37:11.721801       1 service.go:306] Service svc-latency-3095/latency-svc-7bkhm updated: 0 ports\nI1007 16:37:11.731405       1 service.go:306] Service svc-latency-3095/latency-svc-7bxpc updated: 0 ports\nI1007 16:37:11.748053       1 service.go:306] Service svc-latency-3095/latency-svc-7dbpk updated: 0 ports\nI1007 16:37:11.756303       1 service.go:306] Service svc-latency-3095/latency-svc-7g7vf updated: 0 ports\nI1007 16:37:11.764777       1 service.go:306] Service svc-latency-3095/latency-svc-7qxlz updated: 0 ports\nI1007 
16:37:11.777992       1 service.go:306] Service svc-latency-3095/latency-svc-7qzx2 updated: 0 ports\nI1007 16:37:11.785107       1 service.go:306] Service svc-latency-3095/latency-svc-7vbmf updated: 0 ports\nI1007 16:37:11.792958       1 service.go:306] Service svc-latency-3095/latency-svc-87x29 updated: 0 ports\nI1007 16:37:11.800707       1 service.go:306] Service svc-latency-3095/latency-svc-87xbm updated: 0 ports\nI1007 16:37:11.812230       1 service.go:306] Service svc-latency-3095/latency-svc-8kddc updated: 0 ports\nI1007 16:37:11.821747       1 service.go:306] Service svc-latency-3095/latency-svc-8lhmv updated: 0 ports\nI1007 16:37:11.828301       1 service.go:306] Service svc-latency-3095/latency-svc-8lr6f updated: 0 ports\nI1007 16:37:11.835262       1 service.go:306] Service svc-latency-3095/latency-svc-8msj6 updated: 0 ports\nI1007 16:37:11.847535       1 service.go:306] Service svc-latency-3095/latency-svc-8vhnw updated: 0 ports\nI1007 16:37:11.863651       1 service.go:306] Service svc-latency-3095/latency-svc-9442f updated: 0 ports\nI1007 16:37:11.877263       1 service.go:306] Service svc-latency-3095/latency-svc-98tvv updated: 0 ports\nI1007 16:37:11.893275       1 service.go:306] Service svc-latency-3095/latency-svc-99d25 updated: 0 ports\nI1007 16:37:11.905648       1 service.go:306] Service svc-latency-3095/latency-svc-9f5wr updated: 0 ports\nI1007 16:37:11.916653       1 service.go:306] Service svc-latency-3095/latency-svc-9kf42 updated: 0 ports\nI1007 16:37:11.932066       1 service.go:306] Service svc-latency-3095/latency-svc-9kfjx updated: 0 ports\nI1007 16:37:11.944226       1 service.go:306] Service svc-latency-3095/latency-svc-9ltd2 updated: 0 ports\nI1007 16:37:11.982402       1 service.go:306] Service svc-latency-3095/latency-svc-9nct6 updated: 0 ports\nI1007 16:37:11.990712       1 service.go:306] Service svc-latency-3095/latency-svc-9xp2v updated: 0 ports\nI1007 16:37:12.007871       1 service.go:306] Service 
svc-latency-3095/latency-svc-b6dh9 updated: 0 ports\nI1007 16:37:12.020734       1 service.go:306] Service svc-latency-3095/latency-svc-b6p8n updated: 0 ports\nI1007 16:37:12.038568       1 service.go:306] Service svc-latency-3095/latency-svc-b6w7q updated: 0 ports\nI1007 16:37:12.048814       1 service.go:306] Service svc-latency-3095/latency-svc-b87kg updated: 0 ports\nI1007 16:37:12.056298       1 service.go:306] Service svc-latency-3095/latency-svc-bbvgj updated: 0 ports\nI1007 16:37:12.068096       1 service.go:306] Service svc-latency-3095/latency-svc-bbzmr updated: 0 ports\nI1007 16:37:12.076035       1 service.go:306] Service svc-latency-3095/latency-svc-bls6s updated: 0 ports\nI1007 16:37:12.084979       1 service.go:306] Service svc-latency-3095/latency-svc-bq96v updated: 0 ports\nI1007 16:37:12.094202       1 service.go:306] Service svc-latency-3095/latency-svc-bwf5g updated: 0 ports\nI1007 16:37:12.102060       1 service.go:306] Service svc-latency-3095/latency-svc-bwj8l updated: 0 ports\nI1007 16:37:12.119281       1 service.go:306] Service svc-latency-3095/latency-svc-bwmg6 updated: 0 ports\nI1007 16:37:12.127497       1 service.go:306] Service svc-latency-3095/latency-svc-bx4kx updated: 0 ports\nI1007 16:37:12.135475       1 service.go:306] Service svc-latency-3095/latency-svc-c44lg updated: 0 ports\nI1007 16:37:12.143207       1 service.go:306] Service svc-latency-3095/latency-svc-c5npb updated: 0 ports\nI1007 16:37:12.150319       1 service.go:306] Service svc-latency-3095/latency-svc-c7lkz updated: 0 ports\nI1007 16:37:12.159014       1 service.go:306] Service svc-latency-3095/latency-svc-c7wm9 updated: 0 ports\nI1007 16:37:12.179500       1 service.go:306] Service svc-latency-3095/latency-svc-cmd6h updated: 0 ports\nI1007 16:37:12.187111       1 service.go:306] Service svc-latency-3095/latency-svc-ctvfm updated: 0 ports\nI1007 16:37:12.193817       1 service.go:306] Service svc-latency-3095/latency-svc-cv7zk updated: 0 ports\nI1007 
16:37:12.203824       1 service.go:306] Service svc-latency-3095/latency-svc-cvhcr updated: 0 ports\nI1007 16:37:12.210120       1 service.go:306] Service svc-latency-3095/latency-svc-d2wzc updated: 0 ports\nI1007 16:37:12.219015       1 service.go:306] Service svc-latency-3095/latency-svc-d5zn8 updated: 0 ports\nI1007 16:37:12.225351       1 service.go:306] Service svc-latency-3095/latency-svc-dm87n updated: 0 ports\nI1007 16:37:12.233728       1 service.go:306] Service svc-latency-3095/latency-svc-dsjks updated: 0 ports\nI1007 16:37:12.256392       1 service.go:306] Service svc-latency-3095/latency-svc-f4n9c updated: 0 ports\nI1007 16:37:12.265208       1 service.go:306] Service svc-latency-3095/latency-svc-f5hjw updated: 0 ports\nI1007 16:37:12.281757       1 service.go:306] Service svc-latency-3095/latency-svc-f6hft updated: 0 ports\nI1007 16:37:12.288986       1 service.go:306] Service svc-latency-3095/latency-svc-f985x updated: 0 ports\nI1007 16:37:12.297079       1 service.go:306] Service svc-latency-3095/latency-svc-fccbn updated: 0 ports\nI1007 16:37:12.305706       1 service.go:306] Service svc-latency-3095/latency-svc-fdk5t updated: 0 ports\nI1007 16:37:12.320074       1 service.go:306] Service svc-latency-3095/latency-svc-ff565 updated: 0 ports\nI1007 16:37:12.328654       1 service.go:306] Service svc-latency-3095/latency-svc-fkcz9 updated: 0 ports\nI1007 16:37:12.335224       1 service.go:306] Service svc-latency-3095/latency-svc-fl9bx updated: 0 ports\nI1007 16:37:12.349433       1 service.go:306] Service svc-latency-3095/latency-svc-fljkf updated: 0 ports\nI1007 16:37:12.362038       1 service.go:306] Service svc-latency-3095/latency-svc-fll55 updated: 0 ports\nI1007 16:37:12.369664       1 service.go:306] Service svc-latency-3095/latency-svc-frh6j updated: 0 ports\nI1007 16:37:12.377673       1 service.go:306] Service svc-latency-3095/latency-svc-fzzx6 updated: 0 ports\nI1007 16:37:12.391119       1 service.go:306] Service 
svc-latency-3095/latency-svc-g5qn4 updated: 0 ports
I1007 16:37:12.398451       1 service.go:306] Service svc-latency-3095/latency-svc-gbf5h updated: 0 ports
I1007 16:37:12.409854       1 service.go:306] Service svc-latency-3095/latency-svc-gccbm updated: 0 ports
I1007 16:37:12.421604       1 service.go:306] Service svc-latency-3095/latency-svc-gllh2 updated: 0 ports
I1007 16:37:12.430905       1 service.go:306] Service svc-latency-3095/latency-svc-gpfnm updated: 0 ports
I1007 16:37:12.438548       1 service.go:306] Service svc-latency-3095/latency-svc-gsmwm updated: 0 ports
I1007 16:37:12.446787       1 service.go:306] Service svc-latency-3095/latency-svc-gvr8j updated: 0 ports
I1007 16:37:12.465248       1 service.go:306] Service svc-latency-3095/latency-svc-gwngf updated: 0 ports
I1007 16:37:12.465314       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-bx4kx"
I1007 16:37:12.465361       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-gsmwm"
I1007 16:37:12.465409       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-6ldx5"
I1007 16:37:12.465464       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-7bkhm"
I1007 16:37:12.465501       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-9442f"
I1007 16:37:12.465513       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-9ltd2"
I1007 16:37:12.465546       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-bbvgj"
I1007 16:37:12.465557       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-bq96v"
I1007 16:37:12.465566       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-5cnrd"
I1007 16:37:12.465575       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-7vbmf"
I1007 16:37:12.465584       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-8lhmv"
I1007 16:37:12.465592       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-bwj8l"
I1007 16:37:12.465630       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-9kf42"
I1007 16:37:12.465641       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-b6p8n"
I1007 16:37:12.465650       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-d5zn8"
I1007 16:37:12.465658       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-dsjks"
I1007 16:37:12.465667       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-f6hft"
I1007 16:37:12.465677       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-fll55"
I1007 16:37:12.465706       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-7g7vf"
I1007 16:37:12.465723       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-87xbm"
I1007 16:37:12.465741       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-4t54c"
I1007 16:37:12.465751       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-4vj22"
I1007 16:37:12.465759       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-59bkz"
I1007 16:37:12.465789       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-5trpp"
I1007 16:37:12.465800       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-6c4j6"
I1007 16:37:12.465813       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-6ckxb"
I1007 16:37:12.465829       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-b6w7q"
I1007 16:37:12.465838       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-bwf5g"
I1007 16:37:12.465846       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-c7lkz"
I1007 16:37:12.465879       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-f985x"
I1007 16:37:12.465887       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-fccbn"
I1007 16:37:12.465895       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-5ml24"
I1007 16:37:12.465903       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-75nss"
I1007 16:37:12.465912       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-8lr6f"
I1007 16:37:12.465926       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-b6dh9"
I1007 16:37:12.465957       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-bbzmr"
I1007 16:37:12.465967       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-f5hjw"
I1007 16:37:12.465975       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-54bfc"
I1007 16:37:12.465983       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-ctvfm"
I1007 16:37:12.465994       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-ff565"
I1007 16:37:12.466003       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-gwngf"
I1007 16:37:12.466035       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-8kddc"
I1007 16:37:12.466046       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-cv7zk"
I1007 16:37:12.466053       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-fl9bx"
I1007 16:37:12.466060       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-fljkf"
I1007 16:37:12.466067       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-fzzx6"
I1007 16:37:12.466164       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-99d25"
I1007 16:37:12.466243       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-9kfjx"
I1007 16:37:12.466320       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-9xp2v"
I1007 16:37:12.466381       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-c5npb"
I1007 16:37:12.466393       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-fkcz9"
I1007 16:37:12.466401       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-66lt9"
I1007 16:37:12.466446       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-7qxlz"
I1007 16:37:12.466457       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-c7wm9"
I1007 16:37:12.466466       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-gpfnm"
I1007 16:37:12.466475       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-5fgxs"
I1007 16:37:12.466484       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-7qzx2"
I1007 16:37:12.466492       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-87x29"
I1007 16:37:12.466534       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-8vhnw"
I1007 16:37:12.466544       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-98tvv"
I1007 16:37:12.466553       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-c44lg"
I1007 16:37:12.466562       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-56lbz"
I1007 16:37:12.466571       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-9nct6"
I1007 16:37:12.466610       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-g5qn4"
I1007 16:37:12.466619       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-7dbpk"
I1007 16:37:12.466630       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-8msj6"
I1007 16:37:12.466638       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-d2wzc"
I1007 16:37:12.466681       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-4vvjq"
I1007 16:37:12.466693       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-gccbm"
I1007 16:37:12.466712       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-gllh2"
I1007 16:37:12.466720       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-5z94d"
I1007 16:37:12.466728       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-9f5wr"
I1007 16:37:12.466737       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-bls6s"
I1007 16:37:12.466769       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-bwmg6"
I1007 16:37:12.466779       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-gbf5h"
I1007 16:37:12.466798       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-5j87t"
I1007 16:37:12.466805       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-5qcv8"
I1007 16:37:12.466813       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-b87kg"
I1007 16:37:12.466847       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-cmd6h"
I1007 16:37:12.466862       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-f4n9c"
I1007 16:37:12.466878       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-fdk5t"
I1007 16:37:12.466888       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-7bxpc"
I1007 16:37:12.466896       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-cvhcr"
I1007 16:37:12.466904       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-dm87n"
I1007 16:37:12.466934       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-frh6j"
I1007 16:37:12.466944       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-gvr8j"
I1007 16:37:12.467116       1 proxier.go:857] "Syncing iptables rules"
I1007 16:37:12.481923       1 service.go:306] Service svc-latency-3095/latency-svc-gwt6k updated: 0 ports
I1007 16:37:12.497482       1 service.go:306] Service svc-latency-3095/latency-svc-gxfhn updated: 0 ports
I1007 16:37:12.508130       1 service.go:306] Service svc-latency-3095/latency-svc-h4dtg updated: 0 ports
I1007 16:37:12.512085       1 proxier.go:824] "syncProxyRules complete" elapsed="46.765157ms"
I1007 16:37:12.523275       1 service.go:306] Service svc-latency-3095/latency-svc-h6b2q updated: 0 ports
I1007 16:37:12.553030       1 service.go:306] Service svc-latency-3095/latency-svc-h6c5s updated: 0 ports
I1007 16:37:12.573658       1 service.go:306] Service svc-latency-3095/latency-svc-h7d26 updated: 0 ports
I1007 16:37:12.595625       1 service.go:306] Service svc-latency-3095/latency-svc-hbpf7 updated: 0 ports
I1007 16:37:12.622746       1 service.go:306] Service svc-latency-3095/latency-svc-hm4s8 updated: 0 ports
I1007 16:37:12.632352       1 service.go:306] Service svc-latency-3095/latency-svc-hmsbs updated: 0 ports
I1007 16:37:12.646705       1 service.go:306] Service svc-latency-3095/latency-svc-hnpw7 updated: 0 ports
I1007 16:37:12.675272       1 service.go:306] Service svc-latency-3095/latency-svc-jc8j5 updated: 0 ports
I1007 16:37:12.697302       1 service.go:306] Service svc-latency-3095/latency-svc-jl5rj updated: 0 ports
I1007 16:37:12.719606       1 service.go:306] Service svc-latency-3095/latency-svc-jlrcx updated: 0 ports
I1007 16:37:12.736197       1 service.go:306] Service svc-latency-3095/latency-svc-jnzmw updated: 0 ports
I1007 16:37:12.751550       1 service.go:306] Service svc-latency-3095/latency-svc-jwjg7 updated: 0 ports
I1007 16:37:12.773615       1 service.go:306] Service svc-latency-3095/latency-svc-jx2gg updated: 0 ports
I1007 16:37:12.786393       1 service.go:306] Service svc-latency-3095/latency-svc-k5wlj updated: 0 ports
I1007 16:37:12.803969       1 service.go:306] Service svc-latency-3095/latency-svc-k6gtd updated: 0 ports
I1007 16:37:12.820590       1 service.go:306] Service svc-latency-3095/latency-svc-k8g55 updated: 0 ports
I1007 16:37:12.833046       1 service.go:306] Service svc-latency-3095/latency-svc-kf28m updated: 0 ports
I1007 16:37:12.844436       1 service.go:306] Service svc-latency-3095/latency-svc-kkd2v updated: 0 ports
I1007 16:37:12.859080       1 service.go:306] Service svc-latency-3095/latency-svc-kkldd updated: 0 ports
I1007 16:37:12.872544       1 service.go:306] Service svc-latency-3095/latency-svc-kn9r7 updated: 0 ports
I1007 16:37:12.884589       1 service.go:306] Service svc-latency-3095/latency-svc-kpg48 updated: 0 ports
I1007 16:37:12.898827       1 service.go:306] Service svc-latency-3095/latency-svc-kqxdp updated: 0 ports
I1007 16:37:12.941801       1 service.go:306] Service svc-latency-3095/latency-svc-ktqvq updated: 0 ports
I1007 16:37:12.956760       1 service.go:306] Service svc-latency-3095/latency-svc-kxdkg updated: 0 ports
I1007 16:37:12.971910       1 service.go:306] Service svc-latency-3095/latency-svc-kxrks updated: 0 ports
I1007 16:37:12.985204       1 service.go:306] Service svc-latency-3095/latency-svc-l6t5w updated: 0 ports
I1007 16:37:13.015029       1 service.go:306] Service svc-latency-3095/latency-svc-l9k59 updated: 0 ports
I1007 16:37:13.015064       1 service.go:306] Service svc-latency-3095/latency-svc-ldxll updated: 0 ports
I1007 16:37:13.024164       1 service.go:306] Service svc-latency-3095/latency-svc-lgr2h updated: 0 ports
I1007 16:37:13.034250       1 service.go:306] Service svc-latency-3095/latency-svc-lr28t updated: 0 ports
I1007 16:37:13.060414       1 service.go:306] Service svc-latency-3095/latency-svc-lxrsz updated: 0 ports
I1007 16:37:13.077162       1 service.go:306] Service svc-latency-3095/latency-svc-m6w6x updated: 0 ports
I1007 16:37:13.093725       1 service.go:306] Service svc-latency-3095/latency-svc-m8mpl updated: 0 ports
I1007 16:37:13.104256       1 service.go:306] Service svc-latency-3095/latency-svc-mdv9v updated: 0 ports
I1007 16:37:13.116715       1 service.go:306] Service svc-latency-3095/latency-svc-mf8w8 updated: 0 ports
I1007 16:37:13.124635       1 service.go:306] Service svc-latency-3095/latency-svc-mg7xm updated: 0 ports
I1007 16:37:13.135517       1 service.go:306] Service svc-latency-3095/latency-svc-mlr6d updated: 0 ports
I1007 16:37:13.154702       1 service.go:306] Service svc-latency-3095/latency-svc-mmps4 updated: 0 ports
I1007 16:37:13.170455       1 service.go:306] Service svc-latency-3095/latency-svc-mp5dk updated: 0 ports
I1007 16:37:13.181589       1 service.go:306] Service svc-latency-3095/latency-svc-mvf29 updated: 0 ports
I1007 16:37:13.192799       1 service.go:306] Service svc-latency-3095/latency-svc-mxnnz updated: 0 ports
I1007 16:37:13.216357       1 service.go:306] Service svc-latency-3095/latency-svc-n2kdw updated: 0 ports
I1007 16:37:13.227926       1 service.go:306] Service svc-latency-3095/latency-svc-nj8wz updated: 0 ports
I1007 16:37:13.241367       1 service.go:306] Service svc-latency-3095/latency-svc-nnsdl updated: 0 ports
I1007 16:37:13.255881       1 service.go:306] Service svc-latency-3095/latency-svc-p68ff updated: 0 ports
I1007 16:37:13.278608       1 service.go:306] Service svc-latency-3095/latency-svc-pgdl8 updated: 0 ports
I1007 16:37:13.310610       1 service.go:306] Service svc-latency-3095/latency-svc-pnlg5 updated: 0 ports
I1007 16:37:13.325405       1 service.go:306] Service svc-latency-3095/latency-svc-ppwwt updated: 0 ports
I1007 16:37:13.335078       1 service.go:306] Service svc-latency-3095/latency-svc-psg5k updated: 0 ports
I1007 16:37:13.348258       1 service.go:306] Service svc-latency-3095/latency-svc-psrmb updated: 0 ports
I1007 16:37:13.385635       1 service.go:306] Service svc-latency-3095/latency-svc-pzmln updated: 0 ports
I1007 16:37:13.398124       1 service.go:306] Service svc-latency-3095/latency-svc-q68gs updated: 0 ports
I1007 16:37:13.419023       1 service.go:306] Service svc-latency-3095/latency-svc-qbzj7 updated: 0 ports
I1007 16:37:13.430561       1 service.go:306] Service svc-latency-3095/latency-svc-qcstz updated: 0 ports
I1007 16:37:13.449523       1 service.go:306] Service svc-latency-3095/latency-svc-qpktt updated: 0 ports
I1007 16:37:13.461258       1 service.go:306] Service svc-latency-3095/latency-svc-qt6bg updated: 0 ports
I1007 16:37:13.485303       1 service.go:306] Service svc-latency-3095/latency-svc-qtsrz updated: 0 ports
I1007 16:37:13.485365       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-hnpw7"
I1007 16:37:13.485382       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-qcstz"
I1007 16:37:13.485390       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-jlrcx"
I1007 16:37:13.485397       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-ktqvq"
I1007 16:37:13.485404       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-lxrsz"
I1007 16:37:13.485411       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-psrmb"
I1007 16:37:13.485418       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-pzmln"
I1007 16:37:13.485439       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-qtsrz"
I1007 16:37:13.485446       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-h7d26"
I1007 16:37:13.485453       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-hm4s8"
I1007 16:37:13.485460       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-kpg48"
I1007 16:37:13.485467       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-jx2gg"
I1007 16:37:13.485474       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-kkldd"
I1007 16:37:13.485481       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-lr28t"
I1007 16:37:13.485492       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-mp5dk"
I1007 16:37:13.485515       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-mvf29"
I1007 16:37:13.485526       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-qbzj7"
I1007 16:37:13.485533       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-qt6bg"
I1007 16:37:13.485540       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-hbpf7"
I1007 16:37:13.485546       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-jnzmw"
I1007 16:37:13.485554       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-m8mpl"
I1007 16:37:13.485561       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-pgdl8"
I1007 16:37:13.485567       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-kf28m"
I1007 16:37:13.485574       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-lgr2h"
I1007 16:37:13.485595       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-ldxll"
I1007 16:37:13.485603       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-h6b2q"
I1007 16:37:13.485611       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-jc8j5"
I1007 16:37:13.485619       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-k8g55"
I1007 16:37:13.485627       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-mg7xm"
I1007 16:37:13.485635       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-p68ff"
I1007 16:37:13.485643       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-qpktt"
I1007 16:37:13.485651       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-h4dtg"
I1007 16:37:13.485674       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-hmsbs"
I1007 16:37:13.485681       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-nnsdl"
I1007 16:37:13.485689       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-k6gtd"
I1007 16:37:13.485696       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-m6w6x"
I1007 16:37:13.485703       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-q68gs"
I1007 16:37:13.485710       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-n2kdw"
I1007 16:37:13.485719       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-ppwwt"
I1007 16:37:13.485728       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-kqxdp"
I1007 16:37:13.485762       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-mxnnz"
I1007 16:37:13.485771       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-h6c5s"
I1007 16:37:13.485779       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-kn9r7"
I1007 16:37:13.485786       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-kxdkg"
I1007 16:37:13.485794       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-l6t5w"
I1007 16:37:13.485802       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-gxfhn"
I1007 16:37:13.485810       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-kkd2v"
I1007 16:37:13.485833       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-mf8w8"
I1007 16:37:13.485842       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-gwt6k"
I1007 16:37:13.485849       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-mdv9v"
I1007 16:37:13.485857       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-kxrks"
I1007 16:37:13.485865       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-nj8wz"
I1007 16:37:13.485872       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-psg5k"
I1007 16:37:13.485880       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-jl5rj"
I1007 16:37:13.485908       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-k5wlj"
I1007 16:37:13.485916       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-mmps4"
I1007 16:37:13.485924       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-pnlg5"
I1007 16:37:13.485931       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-jwjg7"
I1007 16:37:13.485939       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-l9k59"
I1007 16:37:13.485946       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-mlr6d"
I1007 16:37:13.486221       1 proxier.go:857] "Syncing iptables rules"
I1007 16:37:13.507105       1 service.go:306] Service svc-latency-3095/latency-svc-r44kh updated: 0 ports
I1007 16:37:13.543167       1 proxier.go:824] "syncProxyRules complete" elapsed="57.81109ms"
I1007 16:37:13.546609       1 service.go:306] Service svc-latency-3095/latency-svc-rddgn updated: 0 ports
I1007 16:37:13.578857       1 service.go:306] Service svc-latency-3095/latency-svc-rhv4z updated: 0 ports
I1007 16:37:13.628001       1 service.go:306] Service svc-latency-3095/latency-svc-rlsbv updated: 0 ports
I1007 16:37:13.643359       1 service.go:306] Service provisioning-3382-7701/csi-hostpathplugin updated: 1 ports
I1007 16:37:13.651724       1 service.go:306] Service svc-latency-3095/latency-svc-rmbrm updated: 0 ports
I1007 16:37:13.667194       1 service.go:306] Service svc-latency-3095/latency-svc-s22cz updated: 0 ports
I1007 16:37:13.690686       1 service.go:306] Service svc-latency-3095/latency-svc-s4dzf updated: 0 ports
I1007 16:37:13.703692       1 service.go:306] Service svc-latency-3095/latency-svc-s699j updated: 0 ports
I1007 16:37:13.723528       1 service.go:306] Service svc-latency-3095/latency-svc-s6cc6 updated: 0 ports
I1007 16:37:13.770072       1 service.go:306] Service svc-latency-3095/latency-svc-szh7f updated: 0 ports
I1007 16:37:13.780971       1 service.go:306] Service svc-latency-3095/latency-svc-t2lf5 updated: 0 ports
I1007 16:37:13.801845       1 service.go:306] Service svc-latency-3095/latency-svc-t6qzt updated: 0 ports
I1007 16:37:13.836677       1 service.go:306] Service svc-latency-3095/latency-svc-t8hsl updated: 0 ports
I1007 16:37:13.850507       1 service.go:306] Service svc-latency-3095/latency-svc-tcwds updated: 0 ports
I1007 16:37:13.872161       1 service.go:306] Service svc-latency-3095/latency-svc-tdxmn updated: 0 ports
I1007 16:37:13.895581       1 service.go:306] Service svc-latency-3095/latency-svc-tslc2 updated: 0 ports
I1007 16:37:13.914236       1 service.go:306] Service svc-latency-3095/latency-svc-v929g updated: 0 ports
I1007 16:37:13.925895       1 service.go:306] Service svc-latency-3095/latency-svc-vbzgw updated: 0 ports
I1007 16:37:13.944714       1 service.go:306] Service svc-latency-3095/latency-svc-vv22v updated: 0 ports
I1007 16:37:14.037346       1 service.go:306] Service svc-latency-3095/latency-svc-w26cn updated: 0 ports
I1007 16:37:14.087980       1 service.go:306] Service svc-latency-3095/latency-svc-w4bs6 updated: 0 ports
I1007 16:37:14.128065       1 service.go:306] Service svc-latency-3095/latency-svc-w6pjg updated: 0 ports
I1007 16:37:14.151929       1 service.go:306] Service svc-latency-3095/latency-svc-wbqq9 updated: 0 ports
I1007 16:37:14.185713       1 service.go:306] Service svc-latency-3095/latency-svc-wmvws updated: 0 ports
I1007 16:37:14.196347       1 service.go:306] Service svc-latency-3095/latency-svc-wn7m9 updated: 0 ports
I1007 16:37:14.208279       1 service.go:306] Service svc-latency-3095/latency-svc-wqtgd updated: 0 ports
I1007 16:37:14.222252       1 service.go:306] Service svc-latency-3095/latency-svc-wtctt updated: 0 ports
I1007 16:37:14.238859       1 service.go:306] Service svc-latency-3095/latency-svc-wtczd updated: 0 ports
I1007 16:37:14.261433       1 service.go:306] Service svc-latency-3095/latency-svc-xhbbx updated: 0 ports
I1007 16:37:14.272170       1 service.go:306] Service svc-latency-3095/latency-svc-xkxhn updated: 0 ports
I1007 16:37:14.279440       1 service.go:306] Service svc-latency-3095/latency-svc-xrq6s updated: 0 ports
I1007 16:37:14.293677       1 service.go:306] Service svc-latency-3095/latency-svc-xv69n updated: 0 ports
I1007 16:37:14.327726       1 service.go:306] Service svc-latency-3095/latency-svc-xznls updated: 0 ports
I1007 16:37:14.338143       1 service.go:306] Service svc-latency-3095/latency-svc-z2zbj updated: 0 ports
I1007 16:37:14.353344       1 service.go:306] Service svc-latency-3095/latency-svc-z6jb8 updated: 0 ports
I1007 16:37:14.369083       1 service.go:306] Service svc-latency-3095/latency-svc-z6sb8 updated: 0 ports
I1007 16:37:14.380561       1 service.go:306] Service svc-latency-3095/latency-svc-z75hs updated: 0 ports
I1007 16:37:14.416541       1 service.go:306] Service svc-latency-3095/latency-svc-zdgxl updated: 0 ports
I1007 16:37:14.444345       1 service.go:306] Service svc-latency-3095/latency-svc-zpzks updated: 0 ports
I1007 16:37:14.472128       1 service.go:306] Service svc-latency-3095/latency-svc-ztbx7 updated: 0 ports
I1007 16:37:14.472376       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-r44kh"
I1007 16:37:14.472420       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-w26cn"
I1007 16:37:14.472444       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-xznls"
I1007 16:37:14.472518       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-ztbx7"
I1007 16:37:14.472623       1 service.go:421] Adding new service port "provisioning-3382-7701/csi-hostpathplugin:dummy" at 100.71.44.4:12345/TCP
I1007 16:37:14.472648       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-s699j"
I1007 16:37:14.472730       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-v929g"
I1007 16:37:14.472745       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-w6pjg"
I1007 16:37:14.472783       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-wn7m9"
I1007 16:37:14.472799       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-t2lf5"
I1007 16:37:14.472857       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-wtczd"
I1007 16:37:14.472872       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-xrq6s"
I1007 16:37:14.472880       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-z6sb8"
I1007 16:37:14.472890       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-rddgn"
I1007 16:37:14.472898       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-wtctt"
I1007 16:37:14.472906       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-zdgxl"
I1007 16:37:14.472934       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-z6jb8"
I1007 16:37:14.472956       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-szh7f"
I1007 16:37:14.472978       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-tdxmn"
I1007 16:37:14.473001       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-wbqq9"
I1007 16:37:14.473035       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-wqtgd"
I1007 16:37:14.473087       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-z2zbj"
I1007 16:37:14.473143       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-zpzks"
I1007 16:37:14.473159       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-s22cz"
I1007 16:37:14.473167       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-t6qzt"
I1007 16:37:14.473253       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-t8hsl"
I1007 16:37:14.473267       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-tslc2"
I1007 16:37:14.473277       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-vv22v"
I1007 16:37:14.473285       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-rhv4z"
I1007 16:37:14.473294       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-rlsbv"
I1007 16:37:14.473344       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-wmvws"
I1007 16:37:14.473398       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-xkxhn"
I1007 16:37:14.473411       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-z75hs"
I1007 16:37:14.473419       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-w4bs6"
I1007 16:37:14.473427       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-xhbbx"
I1007 16:37:14.473435       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-xv69n"
I1007 16:37:14.473466       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-rmbrm"
I1007 16:37:14.473501       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-s4dzf"
I1007 16:37:14.473546       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-s6cc6"
I1007 16:37:14.473563       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-tcwds"
I1007 16:37:14.473572       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-vbzgw"
I1007 16:37:14.473725       1 proxier.go:857] "Syncing iptables rules"
I1007 16:37:14.508371       1 proxier.go:824] "syncProxyRules complete" elapsed="35.98404ms"
I1007 16:37:14.525264       1 service.go:306] Service svc-latency-3095/latency-svc-zvn2p updated: 0 ports
I1007 16:37:14.556063       1 service.go:306] Service svc-latency-3095/latency-svc-zzzr9 updated: 0 ports
I1007 16:37:15.512487       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-zvn2p"
I1007 16:37:15.512524       1 service.go:446] Removing service port "svc-latency-3095/latency-svc-zzzr9"
I1007 16:37:15.512674       1 proxier.go:857] "Syncing iptables rules"
I1007 16:37:15.694778       1 proxier.go:824] "syncProxyRules complete" elapsed="182.287958ms"
I1007 16:37:16.695081       1 proxier.go:857] "Syncing iptables rules"
I1007 16:37:16.729116       1 proxier.go:824] "syncProxyRules complete" elapsed="34.185329ms"
I1007 16:37:17.729735       1 proxier.go:857] "Syncing iptables rules"
I1007 16:37:17.762623       1 proxier.go:824] "syncProxyRules complete" elapsed="33.007869ms"
I1007 16:37:18.503812       1 proxier.go:857] "Syncing iptables rules"
I1007 16:37:18.621334       1 proxier.go:824] "syncProxyRules complete" elapsed="117.652165ms"
I1007 16:37:18.658311       1 service.go:306] Service ephemeral-8231-6366/csi-hostpathplugin updated: 0 ports
I1007 16:37:19.623452       1 service.go:446] Removing service port "ephemeral-8231-6366/csi-hostpathplugin:dummy"
I1007 16:37:19.624411       1 proxier.go:857] "Syncing iptables rules"
I1007 16:37:19.712745       1 service.go:306] Service ephemeral-9076-5159/csi-hostpathplugin updated: 0 ports
I1007 16:37:19.752277       1 proxier.go:824] "syncProxyRules complete" elapsed="128.822208ms"
I1007 16:37:19.978383       1 service.go:306] Service kubectl-278/rm2 updated: 1 ports
I1007 16:37:20.752793       1 service.go:421] Adding new service port "kubectl-278/rm2" at 100.64.31.146:1234/TCP
I1007 16:37:20.752847       1 service.go:446] Removing service port "ephemeral-9076-5159/csi-hostpathplugin:dummy"
I1007 16:37:20.753030       1 proxier.go:857] "Syncing iptables rules"
I1007 16:37:20.791092       1 proxier.go:824] "syncProxyRules complete" elapsed="38.319359ms"
I1007 16:37:24.975134       1 service.go:306] Service volume-expand-1606-6948/csi-hostpathplugin updated: 0 ports
I1007 16:37:24.975228       1 service.go:446] Removing service port "volume-expand-1606-6948/csi-hostpathplugin:dummy"
I1007 16:37:24.975472       1 proxier.go:857] "Syncing iptables rules"
I1007 16:37:25.015921       1 proxier.go:824] "syncProxyRules complete" elapsed="40.679961ms"
I1007 16:37:25.016119       1 proxier.go:857] "Syncing iptables rules"
I1007 16:37:25.057854       1 proxier.go:824] "syncProxyRules complete" elapsed="41.875436ms"
I1007 16:37:25.125031       1 service.go:306] Service kubectl-278/rm3 updated: 1 ports
I1007 16:37:26.058996       1 service.go:421] Adding new service port "kubectl-278/rm3" at 100.70.34.105:2345/TCP
I1007 16:37:26.059287       1 proxier.go:857] "Syncing iptables rules"
I1007 16:37:26.103063       1 proxier.go:824] "syncProxyRules complete" elapsed="44.081521ms"
I1007 16:37:27.103937       1 proxier.go:857] "Syncing iptables rules"
I1007 16:37:27.141066       1 proxier.go:824] "syncProxyRules complete" elapsed="37.277968ms"
I1007 16:37:32.937169       1 proxier.go:857] "Syncing iptables rules"
I1007 16:37:32.995024       1 proxier.go:824] "syncProxyRules complete" elapsed="57.979135ms"
I1007 16:37:32.995193       1 proxier.go:857] "Syncing iptables rules"
I1007 16:37:33.031361       1 service.go:306] Service kubectl-278/rm2 updated: 0 ports
I1007 16:37:33.052963       1 proxier.go:824] "syncProxyRules complete" elapsed="57.892182ms"
I1007 16:37:33.054302       1 service.go:306] Service kubectl-278/rm3 updated: 0 ports
I1007 16:37:34.053116       1 service.go:446] Removing service port "kubectl-278/rm2"
I1007 16:37:34.053161       1 service.go:446] Removing service port "kubectl-278/rm3"
I1007 16:37:34.053326       1 proxier.go:857] "Syncing iptables rules"
I1007 16:37:34.103731       1 proxier.go:824] "syncProxyRules complete" elapsed="50.613508ms"
I1007 16:37:35.104045       1 proxier.go:857] "Syncing iptables rules"
I1007 16:37:35.144385       1 proxier.go:824] "syncProxyRules complete" elapsed="40.506931ms"
I1007 16:37:36.675250       1 service.go:306] Service volume-expand-3529-148/csi-hostpathplugin updated: 0 ports
I1007 16:37:36.675294       1 service.go:446] Removing service port "volume-expand-3529-148/csi-hostpathplugin:dummy"
I1007 16:37:36.675479       1 proxier.go:857] "Syncing iptables rules"
I1007 16:37:36.711560       1 proxier.go:824] "syncProxyRules complete" elapsed="36.259733ms"
I1007 16:37:37.711822       1 proxier.go:857] "Syncing iptables rules"
I1007 16:37:37.752054       1 proxier.go:824] "syncProxyRules complete" elapsed="40.362066ms"
I1007 16:37:39.983261       1 service.go:306] Service services-1672/nodeport-range-test updated: 1 ports
I1007 16:37:39.983305       1 service.go:421] Adding new service port "services-1672/nodeport-range-test" at 100.69.136.134:80/TCP
I1007 16:37:39.983439       1 proxier.go:857] "Syncing iptables rules"
I1007 16:37:40.070595       1 proxier.go:1292] "Opened local port" port="\"nodePort for services-1672/nodeport-range-test\" (:32164/tcp4)"
I1007 16:37:40.084382       1 proxier.go:824] "syncProxyRules complete" elapsed="101.065016ms"
I1007 16:37:40.084538       1 proxier.go:857] "Syncing iptables rules"
I1007 16:37:40.166001       1 proxier.go:824] "syncProxyRules complete" elapsed="81.572738ms"
I1007 16:37:40.421211       1 service.go:306] Service services-1672/nodeport-range-test updated: 0 ports
I1007 16:37:41.166156       1 service.go:446] Removing service port "services-1672/nodeport-range-test"
I1007 16:37:41.166315       1 proxier.go:857] "Syncing iptables rules"
I1007 16:37:41.201457       1 proxier.go:824] "syncProxyRules complete" elapsed="35.303188ms"
I1007 16:37:44.132603       1 proxier.go:857] "Syncing iptables rules"
I1007 16:37:44.184182       1 proxier.go:824] "syncProxyRules complete" elapsed="51.691798ms"
I1007 16:37:44.188195       1 service.go:306] Service services-1343/no-pods updated: 0 ports
I1007 16:37:44.188231       1 service.go:446] Removing service port "services-1343/no-pods"
I1007 16:37:44.188380       1 proxier.go:857] "Syncing iptables rules"
I1007 16:37:44.242978       1 proxier.go:824] "syncProxyRules complete" elapsed="54.731699ms"
I1007 16:37:45.243295       1 proxier.go:857] "Syncing iptables rules"
I1007 16:37:45.277579       1 proxier.go:824] "syncProxyRules complete" elapsed="34.416044ms"
I1007 16:37:48.490082       1 service.go:306] Service services-4131/nodeport-test updated: 0 ports
I1007 16:37:48.490149       1 service.go:446] Removing service port "services-4131/nodeport-test:http"
I1007 16:37:48.490364       1 proxier.go:857] "Syncing iptables rules"
I1007 16:37:48.525817       1 proxier.go:824] "syncProxyRules complete" elapsed="35.65811ms"
I1007 16:37:48.526058       1 proxier.go:857] "Syncing iptables rules"
I1007 16:37:48.560195       1 proxier.go:824] "syncProxyRules complete" elapsed="34.338771ms"
I1007 16:37:48.687626       1 service.go:306] Service dns-2292/test-service-2 updated: 1 ports
I1007 16:37:49.560348       1 service.go:421] Adding new service port "dns-2292/test-service-2:http" at 100.69.138.93:80/TCP
I1007 16:37:49.560508       1 proxier.go:857] "Syncing iptables rules"
I1007 16:37:49.621785       1 proxier.go:824] "syncProxyRules complete" elapsed="61.450085ms"
I1007 16:37:50.152540       1 service.go:306] Service webhook-8807/e2e-test-webhook updated: 1 ports
I1007 16:37:50.621957       1 service.go:421] Adding new service port "webhook-8807/e2e-test-webhook" at 100.66.147.238:8443/TCP
I1007 16:37:50.622148       1 proxier.go:857] "Syncing iptables rules"
I1007 16:37:50.675113       1 proxier.go:824] "syncProxyRules complete" elapsed="53.176342ms"
I1007 16:37:53.773869       1 service.go:306] Service ephemeral-9485-6661/csi-hostpathplugin updated: 1 ports
I1007 16:37:53.773912       1 service.go:421] Adding new service port "ephemeral-9485-6661/csi-hostpathplugin:dummy" at 100.68.119.36:12345/TCP
I1007 16:37:53.774038       1 proxier.go:857] "Syncing iptables rules"
I1007 16:37:53.819341       1 proxier.go:824] "syncProxyRules complete" elapsed="45.416961ms"
I1007 16:37:53.819611       1 proxier.go:857] "Syncing iptables rules"
I1007 16:37:53.857712       1 proxier.go:824] "syncProxyRules complete" elapsed="38.211121ms"
I1007 16:37:57.105614       1 proxier.go:857] "Syncing iptables rules"
I1007 16:37:57.157631       1 proxier.go:824] "syncProxyRules complete" elapsed="52.084721ms"
I1007 16:37:59.735793       1 service.go:306] Service webhook-8807/e2e-test-webhook updated: 0 ports
I1007 16:37:59.735833       1 service.go:446] Removing service port "webhook-8807/e2e-test-webhook"
I1007 16:37:59.735963       1 proxier.go:857] "Syncing iptables rules"
I1007 16:37:59.778119       1 proxier.go:824] "syncProxyRules complete" elapsed="42.269793ms"
I1007 16:37:59.778271       1 proxier.go:857] "Syncing iptables rules"
I1007 16:37:59.819854       1 proxier.go:824] "syncProxyRules 
complete\" elapsed=\"41.691809ms\"\nI1007 16:38:03.706848       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:38:03.771548       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"64.896167ms\"\nI1007 16:38:05.506236       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:38:05.562759       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"56.636431ms\"\nI1007 16:38:06.691665       1 service.go:306] Service provisioning-3382-7701/csi-hostpathplugin updated: 0 ports\nI1007 16:38:06.691704       1 service.go:446] Removing service port \"provisioning-3382-7701/csi-hostpathplugin:dummy\"\nI1007 16:38:06.691812       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:38:06.740027       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"48.310252ms\"\nI1007 16:38:06.740329       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:38:06.778782       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"38.5752ms\"\nI1007 16:38:13.702998       1 service.go:306] Service services-7878/service-proxy-toggled updated: 1 ports\nI1007 16:38:13.703054       1 service.go:421] Adding new service port \"services-7878/service-proxy-toggled\" at 100.65.1.182:80/TCP\nI1007 16:38:13.703178       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:38:13.804749       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"101.680948ms\"\nI1007 16:38:13.805045       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:38:13.848025       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"43.232176ms\"\nI1007 16:38:15.162412       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:38:15.195585       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"33.262612ms\"\nI1007 16:38:16.199667       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:38:16.287714       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"88.219493ms\"\nI1007 16:38:19.091233       1 service.go:306] Service kubectl-945/agnhost-replica updated: 0 
ports\nI1007 16:38:19.091278       1 service.go:446] Removing service port \"kubectl-945/agnhost-replica\"\nI1007 16:38:19.091388       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:38:19.170172       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"78.880665ms\"\nI1007 16:38:19.170428       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:38:19.273531       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"103.316072ms\"\nI1007 16:38:19.779273       1 service.go:306] Service kubectl-945/agnhost-primary updated: 0 ports\nI1007 16:38:20.273988       1 service.go:446] Removing service port \"kubectl-945/agnhost-primary\"\nI1007 16:38:20.274169       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:38:20.326862       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"52.871341ms\"\nI1007 16:38:20.464620       1 service.go:306] Service kubectl-945/frontend updated: 0 ports\nI1007 16:38:21.327018       1 service.go:446] Removing service port \"kubectl-945/frontend\"\nI1007 16:38:21.327214       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:38:21.383271       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"56.25978ms\"\nI1007 16:38:27.399163       1 service.go:306] Service dns-262/test-service-2 updated: 1 ports\nI1007 16:38:27.399209       1 service.go:421] Adding new service port \"dns-262/test-service-2:http\" at 100.65.177.111:80/TCP\nI1007 16:38:27.399330       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:38:27.437850       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"38.633576ms\"\nI1007 16:38:27.437993       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:38:27.474025       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"36.131568ms\"\nI1007 16:38:29.731481       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:38:29.774429       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"43.02185ms\"\nI1007 16:38:31.341063       1 service.go:306] Service 
resourcequota-7670/test-service updated: 1 ports\nI1007 16:38:31.341108       1 service.go:421] Adding new service port \"resourcequota-7670/test-service\" at 100.66.220.16:80/TCP\nI1007 16:38:31.341230       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:38:31.378098       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"36.985398ms\"\nI1007 16:38:31.501129       1 service.go:306] Service resourcequota-7670/test-service-np updated: 1 ports\nI1007 16:38:31.501200       1 service.go:421] Adding new service port \"resourcequota-7670/test-service-np\" at 100.66.163.117:80/TCP\nI1007 16:38:31.501637       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:38:31.534541       1 proxier.go:1292] \"Opened local port\" port=\"\\\"nodePort for resourcequota-7670/test-service-np\\\" (:31187/tcp4)\"\nI1007 16:38:31.539410       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"38.226629ms\"\nI1007 16:38:33.949858       1 service.go:306] Service resourcequota-7670/test-service updated: 0 ports\nI1007 16:38:33.949911       1 service.go:446] Removing service port \"resourcequota-7670/test-service\"\nI1007 16:38:33.950102       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:38:33.987054       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"37.059045ms\"\nI1007 16:38:34.101197       1 service.go:306] Service resourcequota-7670/test-service-np updated: 0 ports\nI1007 16:38:34.101236       1 service.go:446] Removing service port \"resourcequota-7670/test-service-np\"\nI1007 16:38:34.101365       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:38:34.150038       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"48.788884ms\"\nI1007 16:38:44.306606       1 service.go:306] Service dns-6187/dns-test-service-3 updated: 1 ports\nI1007 16:38:44.306873       1 service.go:421] Adding new service port \"dns-6187/dns-test-service-3:http\" at 100.64.10.139:80/TCP\nI1007 16:38:44.307111       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 
16:38:44.342620       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"35.74748ms\"\nI1007 16:38:47.616634       1 service.go:306] Service dns-6187/dns-test-service-3 updated: 0 ports\nI1007 16:38:47.616676       1 service.go:446] Removing service port \"dns-6187/dns-test-service-3:http\"\nI1007 16:38:47.616801       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:38:47.684450       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"67.710019ms\"\nI1007 16:38:51.539452       1 service.go:306] Service deployment-7410/test-rolling-update-with-lb updated: 1 ports\nI1007 16:38:51.539499       1 service.go:421] Adding new service port \"deployment-7410/test-rolling-update-with-lb\" at 100.69.22.153:80/TCP\nI1007 16:38:51.539941       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:38:51.575157       1 proxier.go:1292] \"Opened local port\" port=\"\\\"nodePort for deployment-7410/test-rolling-update-with-lb\\\" (:30903/tcp4)\"\nI1007 16:38:51.581697       1 service_health.go:98] Opening healthcheck \"deployment-7410/test-rolling-update-with-lb\" on port 30586\nI1007 16:38:51.581796       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"42.297133ms\"\nI1007 16:38:51.582142       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:38:51.619949       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"38.104764ms\"\nI1007 16:38:53.828878       1 service.go:306] Service webhook-1074/e2e-test-webhook updated: 1 ports\nI1007 16:38:53.829103       1 service.go:421] Adding new service port \"webhook-1074/e2e-test-webhook\" at 100.71.33.39:8443/TCP\nI1007 16:38:53.829282       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:38:53.868164       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"39.074924ms\"\nI1007 16:38:53.868323       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:38:53.906865       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"38.658859ms\"\nE1007 16:38:54.346419       1 utils.go:282] Skipping 
invalid IP: \nI1007 16:38:58.733425       1 service.go:306] Service webhook-1074/e2e-test-webhook updated: 0 ports\nI1007 16:38:58.733470       1 service.go:446] Removing service port \"webhook-1074/e2e-test-webhook\"\nI1007 16:38:58.733592       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:38:58.769987       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"36.505528ms\"\nI1007 16:38:58.770161       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:38:58.807953       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"37.922376ms\"\nI1007 16:39:04.539196       1 service.go:306] Service ephemeral-4236-7951/csi-hostpathplugin updated: 1 ports\nI1007 16:39:04.539245       1 service.go:421] Adding new service port \"ephemeral-4236-7951/csi-hostpathplugin:dummy\" at 100.70.110.170:12345/TCP\nI1007 16:39:04.539349       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:39:04.578787       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"39.53658ms\"\nI1007 16:39:04.579022       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:39:04.616027       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"37.195702ms\"\nI1007 16:39:05.865599       1 service.go:306] Service ephemeral-9485-6661/csi-hostpathplugin updated: 0 ports\nI1007 16:39:05.865697       1 service.go:446] Removing service port \"ephemeral-9485-6661/csi-hostpathplugin:dummy\"\nI1007 16:39:05.865853       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:39:05.984706       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"118.99254ms\"\nI1007 16:39:06.987619       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:39:07.051411       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"64.207892ms\"\nI1007 16:39:09.677454       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:39:09.710249       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"32.873062ms\"\nI1007 16:39:10.330647       1 service.go:306] Service 
services-783/nodeport-collision-1 updated: 1 ports\nI1007 16:39:10.330703       1 service.go:421] Adding new service port \"services-783/nodeport-collision-1\" at 100.65.215.84:80/TCP\nI1007 16:39:10.330807       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:39:10.359231       1 proxier.go:1292] \"Opened local port\" port=\"\\\"nodePort for services-783/nodeport-collision-1\\\" (:31436/tcp4)\"\nI1007 16:39:10.364614       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"33.90359ms\"\nI1007 16:39:10.651815       1 service.go:306] Service services-783/nodeport-collision-1 updated: 0 ports\nI1007 16:39:10.815324       1 service.go:306] Service services-783/nodeport-collision-2 updated: 1 ports\nI1007 16:39:10.815386       1 service.go:446] Removing service port \"services-783/nodeport-collision-1\"\nI1007 16:39:10.815410       1 service.go:421] Adding new service port \"services-783/nodeport-collision-2\" at 100.67.154.3:80/TCP\nI1007 16:39:10.815999       1 proxier.go:857] \"Syncing iptables rules\"\nE1007 16:39:10.854998       1 proxier.go:1289] \"can't open port, skipping this nodePort\" err=\"listen tcp4 :31436: bind: address already in use\" port=\"\\\"nodePort for services-783/nodeport-collision-2\\\" (:31436/tcp4)\"\nI1007 16:39:10.863166       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"47.781713ms\"\nI1007 16:39:10.961030       1 service.go:306] Service services-783/nodeport-collision-2 updated: 0 ports\nI1007 16:39:11.863460       1 service.go:446] Removing service port \"services-783/nodeport-collision-2\"\nI1007 16:39:11.863661       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:39:11.905166       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"41.709979ms\"\nI1007 16:39:21.980657       1 service.go:306] Service webhook-3689/e2e-test-webhook updated: 1 ports\nI1007 16:39:21.980716       1 service.go:421] Adding new service port \"webhook-3689/e2e-test-webhook\" at 100.67.136.213:8443/TCP\nI1007 16:39:21.980871   
    1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:39:22.046516       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"65.797991ms\"\nI1007 16:39:22.046772       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:39:22.108351       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"61.792908ms\"\nI1007 16:39:25.181610       1 service.go:306] Service webhook-3689/e2e-test-webhook updated: 0 ports\nI1007 16:39:25.181648       1 service.go:446] Removing service port \"webhook-3689/e2e-test-webhook\"\nI1007 16:39:25.181786       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:39:25.231993       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"50.326956ms\"\nI1007 16:39:25.232208       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:39:25.300080       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"67.988027ms\"\nI1007 16:39:53.112753       1 service.go:306] Service crd-webhook-2998/e2e-test-crd-conversion-webhook updated: 1 ports\nI1007 16:39:53.112943       1 service.go:421] Adding new service port \"crd-webhook-2998/e2e-test-crd-conversion-webhook\" at 100.65.61.110:9443/TCP\nI1007 16:39:53.113131       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:39:53.159362       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"46.55618ms\"\nI1007 16:39:53.159582       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:39:53.200312       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"40.857194ms\"\nI1007 16:39:57.549326       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:39:57.642151       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"92.916519ms\"\nI1007 16:39:57.642862       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:39:57.688693       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"46.501726ms\"\nI1007 16:39:58.231644       1 service.go:306] Service crd-webhook-2998/e2e-test-crd-conversion-webhook updated: 0 ports\nI1007 16:39:58.689598       1 
service.go:446] Removing service port \"crd-webhook-2998/e2e-test-crd-conversion-webhook\"\nI1007 16:39:58.689764       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:39:58.723894       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"34.298314ms\"\nI1007 16:39:59.724201       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:39:59.769750       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"45.711434ms\"\nI1007 16:40:01.143947       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:40:01.196879       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"53.037821ms\"\nI1007 16:40:02.200865       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:40:02.429689       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"229.015946ms\"\nI1007 16:40:07.381223       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:40:07.482177       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"101.117098ms\"\nI1007 16:40:07.482454       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:40:07.549349       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"67.129291ms\"\nI1007 16:40:09.364286       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:40:09.409352       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"45.139631ms\"\nI1007 16:40:09.409717       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:40:09.465350       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"55.931021ms\"\nI1007 16:40:09.921323       1 service.go:306] Service ephemeral-4236-7951/csi-hostpathplugin updated: 0 ports\nI1007 16:40:10.465492       1 service.go:446] Removing service port \"ephemeral-4236-7951/csi-hostpathplugin:dummy\"\nI1007 16:40:10.465666       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:40:10.503215       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"37.72537ms\"\nI1007 16:40:11.503525       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:40:11.549850       1 proxier.go:824] 
\"syncProxyRules complete\" elapsed=\"46.445436ms\"\nI1007 16:40:14.976979       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:40:15.051799       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"75.021096ms\"\nI1007 16:40:15.051974       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:40:15.121818       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"69.973936ms\"\nI1007 16:40:22.892617       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:40:22.927518       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"34.999748ms\"\nI1007 16:40:22.927692       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:40:22.970782       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"43.219681ms\"\nI1007 16:40:24.311181       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:40:24.354718       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"43.664644ms\"\nI1007 16:40:25.354937       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:40:25.395041       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"40.217564ms\"\nI1007 16:40:29.172087       1 service.go:306] Service webhook-72/e2e-test-webhook updated: 1 ports\nI1007 16:40:29.172130       1 service.go:421] Adding new service port \"webhook-72/e2e-test-webhook\" at 100.64.62.101:8443/TCP\nI1007 16:40:29.172246       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:40:29.235365       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"63.221585ms\"\nI1007 16:40:29.235778       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:40:29.287705       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"52.294198ms\"\nI1007 16:40:31.515785       1 service.go:306] Service webhook-72/e2e-test-webhook updated: 0 ports\nI1007 16:40:31.515824       1 service.go:446] Removing service port \"webhook-72/e2e-test-webhook\"\nI1007 16:40:31.515945       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:40:31.719863       1 proxier.go:824] 
\"syncProxyRules complete\" elapsed=\"204.019524ms\"\nI1007 16:40:31.720014       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:40:31.830675       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"110.764175ms\"\nI1007 16:40:45.811992       1 service.go:306] Service services-7055/sourceip-test updated: 1 ports\nI1007 16:40:45.812051       1 service.go:421] Adding new service port \"services-7055/sourceip-test\" at 100.66.192.72:8080/TCP\nI1007 16:40:45.812170       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:40:45.856754       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"44.697327ms\"\nI1007 16:40:45.857091       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:40:45.910670       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"53.874816ms\"\nI1007 16:40:47.097255       1 service.go:306] Service ephemeral-1426-4285/csi-hostpathplugin updated: 1 ports\nI1007 16:40:47.097306       1 service.go:421] Adding new service port \"ephemeral-1426-4285/csi-hostpathplugin:dummy\" at 100.66.170.11:12345/TCP\nI1007 16:40:47.097410       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:40:47.152815       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"55.497278ms\"\nI1007 16:40:48.154142       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:40:48.205905       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"51.9101ms\"\nI1007 16:40:52.703312       1 service.go:306] Service volume-expand-6380-3934/csi-hostpathplugin updated: 1 ports\nI1007 16:40:52.703365       1 service.go:421] Adding new service port \"volume-expand-6380-3934/csi-hostpathplugin:dummy\" at 100.65.186.119:12345/TCP\nI1007 16:40:52.703484       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:40:52.740771       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"37.391461ms\"\nI1007 16:40:52.740940       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:40:52.775970       1 proxier.go:824] \"syncProxyRules complete\" 
elapsed=\"35.158469ms\"\nI1007 16:40:53.468559       1 service.go:306] Service volume-expand-5512-1099/csi-hostpathplugin updated: 1 ports\nI1007 16:40:53.777799       1 service.go:421] Adding new service port \"volume-expand-5512-1099/csi-hostpathplugin:dummy\" at 100.67.44.177:12345/TCP\nI1007 16:40:53.778024       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:40:53.814047       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"36.269897ms\"\nI1007 16:41:02.486381       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:41:02.585708       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"99.439977ms\"\nI1007 16:41:03.382553       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:41:03.418747       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"36.261191ms\"\nI1007 16:41:06.979858       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:41:07.020416       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"40.676442ms\"\nI1007 16:41:08.606534       1 service.go:306] Service webhook-1677/e2e-test-webhook updated: 1 ports\nI1007 16:41:08.606576       1 service.go:421] Adding new service port \"webhook-1677/e2e-test-webhook\" at 100.64.241.195:8443/TCP\nI1007 16:41:08.606705       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:41:08.655540       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"48.957265ms\"\nI1007 16:41:08.655729       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:41:08.707265       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"51.682909ms\"\nI1007 16:41:11.792995       1 service.go:306] Service webhook-1677/e2e-test-webhook updated: 0 ports\nI1007 16:41:11.793034       1 service.go:446] Removing service port \"webhook-1677/e2e-test-webhook\"\nI1007 16:41:11.793161       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:41:11.845208       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"52.156105ms\"\nI1007 16:41:11.845371       1 proxier.go:857] \"Syncing 
iptables rules\"\nI1007 16:41:11.919344       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"74.074984ms\"\nI1007 16:41:13.293092       1 service.go:306] Service conntrack-1335/svc-udp updated: 1 ports\nI1007 16:41:13.293156       1 service.go:421] Adding new service port \"conntrack-1335/svc-udp:udp\" at 100.66.208.138:80/UDP\nI1007 16:41:13.293334       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:41:13.329115       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"35.956915ms\"\nI1007 16:41:14.309627       1 service.go:306] Service services-8024/up-down-1 updated: 1 ports\nI1007 16:41:14.309670       1 service.go:421] Adding new service port \"services-8024/up-down-1\" at 100.71.115.244:80/TCP\nI1007 16:41:14.309772       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:41:14.355704       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"46.027617ms\"\nI1007 16:41:15.356035       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:41:15.394200       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"38.283404ms\"\nI1007 16:41:17.150440       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:41:17.203070       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"52.725577ms\"\nI1007 16:41:17.510231       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:41:17.575215       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"65.076457ms\"\nI1007 16:41:17.903774       1 service.go:306] Service services-8024/up-down-2 updated: 1 ports\nI1007 16:41:18.359831       1 service.go:421] Adding new service port \"services-8024/up-down-2\" at 100.71.133.141:80/TCP\nI1007 16:41:18.360038       1 proxier.go:841] \"Stale service\" protocol=\"udp\" svcPortName=\"conntrack-1335/svc-udp:udp\" clusterIP=\"100.66.208.138\"\nI1007 16:41:18.360064       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:41:18.417730       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"57.90959ms\"\nI1007 16:41:19.589350       1 
proxier.go:857] \"Syncing iptables rules\"\nI1007 16:41:19.625158       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"35.930405ms\"\nI1007 16:41:20.625547       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:41:20.665295       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"39.896782ms\"\nI1007 16:41:21.288187       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:41:21.334872       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"46.907186ms\"\nI1007 16:41:27.710755       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:41:27.771789       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"61.160105ms\"\nI1007 16:41:29.213950       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:41:29.258193       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"44.393172ms\"\nI1007 16:41:31.015826       1 proxier.go:857] \"Syncing iptables rules\"\nE1007 16:41:31.059951       1 utils.go:282] Skipping invalid IP: \nE1007 16:41:31.059992       1 utils.go:282] Skipping invalid IP: \nI1007 16:41:31.093522       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"77.844266ms\"\nI1007 16:41:31.093775       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:41:31.157586       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"64.013406ms\"\nI1007 16:41:32.158603       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:41:32.197184       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"38.700775ms\"\nI1007 16:41:35.935130       1 service.go:306] Service services-3054/nodeport-service updated: 1 ports\nI1007 16:41:35.935176       1 service.go:421] Adding new service port \"services-3054/nodeport-service\" at 100.65.85.215:80/TCP\nI1007 16:41:35.935310       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:41:35.969261       1 proxier.go:1292] \"Opened local port\" port=\"\\\"nodePort for services-3054/nodeport-service\\\" (:31318/tcp4)\"\nI1007 16:41:35.975147       1 proxier.go:824] 
"syncProxyRules complete" elapsed="39.965908ms"
I1007 16:41:35.975395       1 proxier.go:857] "Syncing iptables rules"
I1007 16:41:36.028863       1 proxier.go:824] "syncProxyRules complete" elapsed="53.669278ms"
I1007 16:41:36.083464       1 service.go:306] Service services-3054/externalsvc updated: 1 ports
I1007 16:41:37.029809       1 service.go:421] Adding new service port "services-3054/externalsvc" at 100.68.243.109:80/TCP
I1007 16:41:37.029968       1 proxier.go:857] "Syncing iptables rules"
I1007 16:41:37.131814       1 proxier.go:824] "syncProxyRules complete" elapsed="102.021113ms"
I1007 16:41:38.132902       1 proxier.go:857] "Syncing iptables rules"
I1007 16:41:38.171858       1 service.go:306] Service volume-3265-2476/csi-hostpathplugin updated: 1 ports
I1007 16:41:38.179653       1 proxier.go:824] "syncProxyRules complete" elapsed="46.922726ms"
I1007 16:41:39.179873       1 service.go:421] Adding new service port "volume-3265-2476/csi-hostpathplugin:dummy" at 100.65.135.186:12345/TCP
I1007 16:41:39.180043       1 proxier.go:857] "Syncing iptables rules"
I1007 16:41:39.221145       1 proxier.go:824] "syncProxyRules complete" elapsed="41.287003ms"
I1007 16:41:39.810278       1 service.go:306] Service services-3054/nodeport-service updated: 0 ports
I1007 16:41:40.221288       1 service.go:446] Removing service port "services-3054/nodeport-service"
I1007 16:41:40.221469       1 proxier.go:857] "Syncing iptables rules"
I1007 16:41:40.265528       1 proxier.go:824] "syncProxyRules complete" elapsed="44.243633ms"
I1007 16:41:40.635280       1 service.go:306] Service conntrack-1335/svc-udp updated: 0 ports
I1007 16:41:41.265857       1 service.go:446] Removing service port "conntrack-1335/svc-udp:udp"
I1007 16:41:41.266115       1 proxier.go:857] "Syncing iptables rules"
I1007 16:41:41.334144       1 proxier.go:824] "syncProxyRules complete" elapsed="68.296881ms"
I1007 16:41:42.334896       1 proxier.go:857] "Syncing iptables rules"
I1007 16:41:42.372484       1 proxier.go:824] "syncProxyRules complete" elapsed="37.69591ms"
I1007 16:41:42.997291       1 service.go:306] Service volume-expand-5512-1099/csi-hostpathplugin updated: 0 ports
I1007 16:41:42.997576       1 service.go:446] Removing service port "volume-expand-5512-1099/csi-hostpathplugin:dummy"
I1007 16:41:42.998003       1 proxier.go:857] "Syncing iptables rules"
I1007 16:41:43.042361       1 proxier.go:824] "syncProxyRules complete" elapsed="44.777044ms"
I1007 16:41:44.042848       1 proxier.go:857] "Syncing iptables rules"
I1007 16:41:44.122050       1 proxier.go:824] "syncProxyRules complete" elapsed="79.355238ms"
I1007 16:41:45.122324       1 proxier.go:857] "Syncing iptables rules"
I1007 16:41:45.160454       1 proxier.go:824] "syncProxyRules complete" elapsed="38.260144ms"
E1007 16:41:52.790004       1 utils.go:282] Skipping invalid IP: 
I1007 16:41:52.790039       1 service.go:306] Service deployment-7410/test-rolling-update-with-lb updated: 0 ports
I1007 16:41:52.790069       1 service.go:446] Removing service port "deployment-7410/test-rolling-update-with-lb"
I1007 16:41:52.790194       1 proxier.go:857] "Syncing iptables rules"
I1007 16:41:52.832483       1 service_health.go:83] Closing healthcheck "deployment-7410/test-rolling-update-with-lb" on port 30586
I1007 16:41:52.832556       1 proxier.go:824] "syncProxyRules complete" elapsed="42.48171ms"
I1007 16:41:55.175820       1 service.go:306] Service services-3054/externalsvc updated: 0 ports
I1007 16:41:55.175859       1 service.go:446] Removing service port "services-3054/externalsvc"
I1007 16:41:55.176060       1 proxier.go:857] "Syncing iptables rules"
I1007 16:41:55.215259       1 proxier.go:824] "syncProxyRules complete" elapsed="39.386492ms"
I1007 16:41:55.215491       1 proxier.go:857] "Syncing iptables rules"
I1007 16:41:55.249311       1 proxier.go:824] "syncProxyRules complete" elapsed="34.011534ms"
W1007 16:42:01.015563       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
I1007 16:42:18.429407       1 service.go:306] Service services-536/endpoint-test2 updated: 1 ports
I1007 16:42:18.429454       1 service.go:421] Adding new service port "services-536/endpoint-test2" at 100.67.211.54:80/TCP
I1007 16:42:18.429594       1 proxier.go:857] "Syncing iptables rules"
I1007 16:42:18.480651       1 proxier.go:824] "syncProxyRules complete" elapsed="51.1863ms"
I1007 16:42:18.480798       1 proxier.go:857] "Syncing iptables rules"
I1007 16:42:18.522705       1 proxier.go:824] "syncProxyRules complete" elapsed="42.014447ms"
W1007 16:42:21.190780       1 endpoints.go:261] Error getting endpoint slice cache keys: No kubernetes.io/service-name label set on endpoint slice: e2e-example-ingwpq6x
W1007 16:42:21.335270       1 endpoints.go:261] Error getting endpoint slice cache keys: No kubernetes.io/service-name label set on endpoint slice: e2e-example-ingbptv5
W1007 16:42:21.479287       1 endpoints.go:261] Error getting endpoint slice cache keys: No kubernetes.io/service-name label set on endpoint slice: e2e-example-ingqpmxf
I1007 16:42:21.512339       1 proxier.go:857] "Syncing iptables rules"
I1007 16:42:21.604052       1 proxier.go:824] "syncProxyRules complete" elapsed="91.832897ms"
W1007 16:42:22.340289       1 endpoints.go:261] Error getting endpoint slice cache keys: No kubernetes.io/service-name label set on endpoint slice: e2e-example-ingqpmxf
W1007 16:42:22.625306       1 endpoints.go:261] Error getting endpoint slice cache keys: No kubernetes.io/service-name label set on endpoint slice: e2e-example-ingqpmxf
W1007 16:42:22.770058       1 endpoints.go:261] Error getting endpoint slice cache keys: No kubernetes.io/service-name label set on endpoint slice: e2e-example-ingqpmxf
W1007 16:42:23.219516       1 endpoints.go:261] Error getting endpoint slice cache keys: No kubernetes.io/service-name label set on endpoint slice: e2e-example-ingbptv5
W1007 16:42:23.221437       1 endpoints.go:261] Error getting endpoint slice cache keys: No kubernetes.io/service-name label set on endpoint slice: e2e-example-ingwpq6x
I1007 16:42:25.994040       1 proxier.go:857] "Syncing iptables rules"
I1007 16:42:26.031255       1 proxier.go:824] "syncProxyRules complete" elapsed="37.290511ms"
I1007 16:42:26.580813       1 service.go:306] Service services-2777/affinity-clusterip-transition updated: 1 ports
I1007 16:42:26.581225       1 service.go:421] Adding new service port "services-2777/affinity-clusterip-transition" at 100.70.82.255:80/TCP
I1007 16:42:26.581317       1 proxier.go:857] "Syncing iptables rules"
I1007 16:42:26.626530       1 proxier.go:824] "syncProxyRules complete" elapsed="45.325767ms"
I1007 16:42:27.498622       1 proxier.go:857] "Syncing iptables rules"
I1007 16:42:27.535194       1 proxier.go:824] "syncProxyRules complete" elapsed="36.705674ms"
I1007 16:42:28.100247       1 proxier.go:857] "Syncing iptables rules"
I1007 16:42:28.156048       1 proxier.go:824] "syncProxyRules complete" elapsed="55.868866ms"
I1007 16:42:28.817428       1 service.go:306] Service services-536/endpoint-test2 updated: 0 ports
I1007 16:42:29.156890       1 service.go:446] Removing service port "services-536/endpoint-test2"
I1007 16:42:29.157122       1 proxier.go:857] "Syncing iptables rules"
I1007 16:42:29.198596       1 proxier.go:824] "syncProxyRules complete" elapsed="41.708096ms"
I1007 16:42:29.682839       1 service.go:306] Service webhook-8318/e2e-test-webhook updated: 1 ports
I1007 16:42:30.075040       1 service.go:306] Service ephemeral-3090-705/csi-hostpathplugin updated: 1 ports
I1007 16:42:30.075087       1 service.go:421] Adding new service port "webhook-8318/e2e-test-webhook" at 100.71.143.121:8443/TCP
I1007 16:42:30.075103       1 service.go:421] Adding new service port "ephemeral-3090-705/csi-hostpathplugin:dummy" at 100.66.238.91:12345/TCP
I1007 16:42:30.075241       1 proxier.go:857] "Syncing iptables rules"
I1007 16:42:30.110694       1 proxier.go:824] "syncProxyRules complete" elapsed="35.603102ms"
I1007 16:42:31.110983       1 proxier.go:857] "Syncing iptables rules"
I1007 16:42:31.186891       1 proxier.go:824] "syncProxyRules complete" elapsed="75.520995ms"
I1007 16:42:35.640196       1 service.go:306] Service webhook-8318/e2e-test-webhook updated: 0 ports
I1007 16:42:35.640240       1 service.go:446] Removing service port "webhook-8318/e2e-test-webhook"
I1007 16:42:35.640408       1 proxier.go:857] "Syncing iptables rules"
I1007 16:42:35.692054       1 proxier.go:824] "syncProxyRules complete" elapsed="51.80124ms"
I1007 16:42:35.692191       1 proxier.go:857] "Syncing iptables rules"
I1007 16:42:35.770402       1 proxier.go:824] "syncProxyRules complete" elapsed="78.300758ms"
I1007 16:42:39.218039       1 proxier.go:857] "Syncing iptables rules"
I1007 16:42:39.271400       1 proxier.go:824] "syncProxyRules complete" elapsed="53.454855ms"
I1007 16:42:41.772929       1 service.go:306] Service volume-3265-2476/csi-hostpathplugin updated: 0 ports
I1007 16:42:41.772969       1 service.go:446] Removing service port "volume-3265-2476/csi-hostpathplugin:dummy"
I1007 16:42:41.773126       1 proxier.go:857] "Syncing iptables rules"
I1007 16:42:41.817061       1 proxier.go:824] "syncProxyRules complete" elapsed="44.064586ms"
I1007 16:42:41.817320       1 proxier.go:857] "Syncing iptables rules"
I1007 16:42:41.870160       1 proxier.go:824] "syncProxyRules complete" elapsed="53.040318ms"
I1007 16:42:44.877017       1 service.go:306] Service services-2777/affinity-clusterip-transition updated: 1 ports
I1007 16:42:44.877074       1 service.go:423] Updating existing service port "services-2777/affinity-clusterip-transition" at 100.70.82.255:80/TCP
I1007 16:42:44.877258       1 proxier.go:857] "Syncing iptables rules"
I1007 16:42:44.953563       1 proxier.go:824] "syncProxyRules complete" elapsed="76.483621ms"
I1007 16:42:51.723982       1 service.go:306] Service ephemeral-1426-4285/csi-hostpathplugin updated: 0 ports
I1007 16:42:51.724022       1 service.go:446] Removing service port "ephemeral-1426-4285/csi-hostpathplugin:dummy"
I1007 16:42:51.724160       1 proxier.go:857] "Syncing iptables rules"
I1007 16:42:51.784263       1 proxier.go:824] "syncProxyRules complete" elapsed="60.222107ms"
I1007 16:42:51.784436       1 proxier.go:857] "Syncing iptables rules"
I1007 16:42:51.846450       1 proxier.go:824] "syncProxyRules complete" elapsed="62.14654ms"
I1007 16:42:54.395771       1 service.go:306] Service provisioning-9817-1575/csi-hostpathplugin updated: 1 ports
I1007 16:42:54.395837       1 service.go:421] Adding new service port "provisioning-9817-1575/csi-hostpathplugin:dummy" at 100.69.60.233:12345/TCP
I1007 16:42:54.396031       1 proxier.go:857] "Syncing iptables rules"
I1007 16:42:54.447867       1 proxier.go:824] "syncProxyRules complete" elapsed="52.026542ms"
I1007 16:42:54.448213       1 proxier.go:857] "Syncing iptables rules"
I1007 16:42:54.521728       1 proxier.go:824] "syncProxyRules complete" elapsed="73.742451ms"
I1007 16:43:01.161713       1 service.go:306] Service volume-expand-6380-3934/csi-hostpathplugin updated: 0 ports
I1007 16:43:01.161769       1 service.go:446] Removing service port "volume-expand-6380-3934/csi-hostpathplugin:dummy"
I1007 16:43:01.161928       1 proxier.go:857] "Syncing iptables rules"
I1007 16:43:01.224208       1 proxier.go:824] "syncProxyRules complete" elapsed="62.421931ms"
I1007 16:43:01.224419       1 proxier.go:857] "Syncing iptables rules"
I1007 16:43:01.291975       1 proxier.go:824] "syncProxyRules complete" elapsed="67.720295ms"
I1007 16:43:02.815219       1 proxier.go:857] "Syncing iptables rules"
I1007 16:43:02.866708       1 proxier.go:824] "syncProxyRules complete" elapsed="51.567475ms"
I1007 16:43:05.577638       1 service.go:306] Service provisioning-2488-9324/csi-hostpathplugin updated: 1 ports
I1007 16:43:05.577681       1 service.go:421] Adding new service port "provisioning-2488-9324/csi-hostpathplugin:dummy" at 100.65.204.98:12345/TCP
I1007 16:43:05.578362       1 proxier.go:857] "Syncing iptables rules"
I1007 16:43:05.664182       1 proxier.go:824] "syncProxyRules complete" elapsed="86.480579ms"
I1007 16:43:05.664516       1 proxier.go:857] "Syncing iptables rules"
I1007 16:43:05.711415       1 proxier.go:824] "syncProxyRules complete" elapsed="47.180336ms"
I1007 16:43:07.319026       1 proxier.go:857] "Syncing iptables rules"
I1007 16:43:07.362284       1 proxier.go:824] "syncProxyRules complete" elapsed="43.348209ms"
I1007 16:43:26.225133       1 service.go:306] Service services-3197/affinity-nodeport-transition updated: 1 ports
I1007 16:43:26.225183       1 service.go:421] Adding new service port "services-3197/affinity-nodeport-transition" at 100.67.198.81:80/TCP
I1007 16:43:26.225312       1 proxier.go:857] "Syncing iptables rules"
I1007 16:43:26.257564       1 proxier.go:1292] "Opened local port" port="\"nodePort for services-3197/affinity-nodeport-transition\" (:31950/tcp4)"
I1007 16:43:26.265696       1 proxier.go:824] "syncProxyRules complete" elapsed="40.506576ms"
I1007 16:43:26.265931       1 proxier.go:857] "Syncing iptables rules"
I1007 16:43:26.305005       1 proxier.go:824] "syncProxyRules complete" elapsed="39.259607ms"
I1007 16:43:28.046346       1 proxier.go:857] "Syncing iptables rules"
I1007 16:43:28.116301       1 proxier.go:824] "syncProxyRules complete" elapsed="70.119997ms"
I1007 16:43:28.397062       1 proxier.go:857] "Syncing iptables rules"
I1007 16:43:28.431596       1 proxier.go:824] "syncProxyRules complete" elapsed="34.665173ms"
I1007 16:43:29.431898       1 proxier.go:857] "Syncing iptables rules"
I1007 16:43:29.478506       1 proxier.go:824] "syncProxyRules complete" elapsed="46.738712ms"
I1007 16:43:29.869589       1 service.go:306] Service services-7055/sourceip-test updated: 0 ports
I1007 16:43:30.479417       1 service.go:446] Removing service port "services-7055/sourceip-test"
I1007 16:43:30.479556       1 proxier.go:857] "Syncing iptables rules"
I1007 16:43:30.516506       1 proxier.go:824] "syncProxyRules complete" elapsed="37.092216ms"
I1007 16:43:39.614965       1 service.go:306] Service ephemeral-3090-705/csi-hostpathplugin updated: 0 ports
I1007 16:43:39.615007       1 service.go:446] Removing service port "ephemeral-3090-705/csi-hostpathplugin:dummy"
I1007 16:43:39.615121       1 proxier.go:857] "Syncing iptables rules"
I1007 16:43:39.651166       1 proxier.go:824] "syncProxyRules complete" elapsed="36.14847ms"
I1007 16:43:39.651351       1 proxier.go:857] "Syncing iptables rules"
I1007 16:43:39.691042       1 proxier.go:824] "syncProxyRules complete" elapsed="39.832168ms"
I1007 16:43:39.813826       1 service.go:306] Service services-3197/affinity-nodeport-transition updated: 1 ports
I1007 16:43:40.691237       1 service.go:423] Updating existing service port "services-3197/affinity-nodeport-transition" at 100.67.198.81:80/TCP
I1007 16:43:40.691398       1 proxier.go:857] "Syncing iptables rules"
I1007 16:43:40.732941       1 proxier.go:824] "syncProxyRules complete" elapsed="41.741968ms"
I1007 16:43:41.667696       1 service.go:306] Service services-3197/affinity-nodeport-transition updated: 1 ports
I1007 16:43:41.667741       1 service.go:423] Updating existing service port "services-3197/affinity-nodeport-transition" at 100.67.198.81:80/TCP
I1007 16:43:41.667866       1 proxier.go:857] "Syncing iptables rules"
I1007 16:43:41.720607       1 proxier.go:824] "syncProxyRules complete" elapsed="52.858055ms"
I1007 16:43:43.920167       1 proxier.go:857] "Syncing iptables rules"
I1007 16:43:43.960516       1 proxier.go:824] "syncProxyRules complete" elapsed="40.488126ms"
I1007 16:43:43.961179       1 proxier.go:857] "Syncing iptables rules"
I1007 16:43:44.005913       1 proxier.go:824] "syncProxyRules complete" elapsed="45.325672ms"
I1007 16:43:46.576403       1 service.go:306] Service provisioning-2488-9324/csi-hostpathplugin updated: 0 ports
I1007 16:43:46.576477       1 service.go:446] Removing service port "provisioning-2488-9324/csi-hostpathplugin:dummy"
I1007 16:43:46.576683       1 proxier.go:857] "Syncing iptables rules"
I1007 16:43:46.673035       1 proxier.go:824] "syncProxyRules complete" elapsed="96.576473ms"
I1007 16:43:46.673257       1 proxier.go:857] "Syncing iptables rules"
I1007 16:43:46.774463       1 proxier.go:824] "syncProxyRules complete" elapsed="101.385414ms"
I1007 16:43:49.093908       1 service.go:306] Service services-4755/tolerate-unready updated: 1 ports
I1007 16:43:49.093959       1 service.go:421] Adding new service port "services-4755/tolerate-unready:http" at 100.69.17.239:80/TCP
I1007 16:43:49.094083       1 proxier.go:857] "Syncing iptables rules"
I1007 16:43:49.136168       1 proxier.go:824] "syncProxyRules complete" elapsed="42.203321ms"
I1007 16:43:49.136368       1 proxier.go:857] "Syncing iptables rules"
I1007 16:43:49.175364       1 proxier.go:824] "syncProxyRules complete" elapsed="39.155596ms"
I1007 16:43:51.203411       1 proxier.go:857] "Syncing iptables rules"
I1007 16:43:51.250572       1 proxier.go:824] "syncProxyRules complete" elapsed="47.16489ms"
I1007 16:43:57.237918       1 service.go:306] Service services-3197/affinity-nodeport-transition updated: 0 ports
I1007 16:43:57.237963       1 service.go:446] Removing service port "services-3197/affinity-nodeport-transition"
I1007 16:43:57.238089       1 proxier.go:857] "Syncing iptables rules"
I1007 16:43:57.362770       1 proxier.go:824] "syncProxyRules complete" elapsed="124.794802ms"
I1007 16:43:57.363018       1 proxier.go:857] "Syncing iptables rules"
I1007 16:43:57.452734       1 proxier.go:824] "syncProxyRules complete" elapsed="89.894335ms"
I1007 16:44:02.516794       1 service.go:306] Service crd-webhook-7862/e2e-test-crd-conversion-webhook updated: 1 ports
I1007 16:44:02.521409       1 service.go:421] Adding new service port "crd-webhook-7862/e2e-test-crd-conversion-webhook" at 100.66.135.171:9443/TCP
I1007 16:44:02.521553       1 proxier.go:857] "Syncing iptables rules"
I1007 16:44:02.559145       1 proxier.go:824] "syncProxyRules complete" elapsed="37.751992ms"
I1007 16:44:02.559395       1 proxier.go:857] "Syncing iptables rules"
I1007 16:44:02.593683       1 proxier.go:824] "syncProxyRules complete" elapsed="34.496109ms"
I1007 16:44:03.536965       1 service.go:306] Service provisioning-9817-1575/csi-hostpathplugin updated: 0 ports
I1007 16:44:03.537004       1 service.go:446] Removing service port "provisioning-9817-1575/csi-hostpathplugin:dummy"
I1007 16:44:03.537437       1 proxier.go:857] "Syncing iptables rules"
I1007 16:44:03.621231       1 proxier.go:824] "syncProxyRules complete" elapsed="84.210608ms"
I1007 16:44:04.537660       1 proxier.go:857] "Syncing iptables rules"
I1007 16:44:04.573365       1 service.go:306] Service services-7878/service-proxy-toggled updated: 0 ports
I1007 16:44:04.625102       1 proxier.go:824] "syncProxyRules complete" elapsed="87.579863ms"
I1007 16:44:05.625959       1 service.go:446] Removing service port "services-7878/service-proxy-toggled"
I1007 16:44:05.626133       1 proxier.go:857] "Syncing iptables rules"
I1007 16:44:05.675774       1 proxier.go:824] "syncProxyRules complete" elapsed="49.816406ms"
I1007 16:44:08.413426       1 service.go:306] Service crd-webhook-7862/e2e-test-crd-conversion-webhook updated: 0 ports
I1007 16:44:08.413468       1 service.go:446] Removing service port "crd-webhook-7862/e2e-test-crd-conversion-webhook"
I1007 16:44:08.413594       1 proxier.go:857] "Syncing iptables rules"
I1007 16:44:08.495808       1 proxier.go:824] "syncProxyRules complete" elapsed="82.329184ms"
I1007 16:44:08.496073       1 proxier.go:857] "Syncing iptables rules"
I1007 16:44:08.535863       1 proxier.go:824] "syncProxyRules complete" elapsed="40.001587ms"
I1007 16:44:09.398621       1 service.go:306] Service volumemode-7203-7465/csi-hostpathplugin updated: 1 ports
I1007 16:44:09.536743       1 service.go:421] Adding new service port "volumemode-7203-7465/csi-hostpathplugin:dummy" at 100.69.141.240:12345/TCP
I1007 16:44:09.536907       1 proxier.go:857] "Syncing iptables rules"
I1007 16:44:09.576406       1 proxier.go:824] "syncProxyRules complete" elapsed="39.695665ms"
I1007 16:44:15.260226       1 service.go:306] Service webhook-9382/e2e-test-webhook updated: 1 ports
I1007 16:44:15.260270       1 service.go:421] Adding new service port "webhook-9382/e2e-test-webhook" at 100.70.7.123:8443/TCP
I1007 16:44:15.260390       1 proxier.go:857] "Syncing iptables rules"
I1007 16:44:15.309532       1 proxier.go:824] "syncProxyRules complete" elapsed="49.257756ms"
I1007 16:44:15.309698       1 proxier.go:857] "Syncing iptables rules"
I1007 16:44:15.372204       1 proxier.go:824] "syncProxyRules complete" elapsed="62.623557ms"
I1007 16:44:18.506839       1 service.go:306] Service ephemeral-1736-2110/csi-hostpathplugin updated: 1 ports
I1007 16:44:18.506886       1 service.go:421] Adding new service port "ephemeral-1736-2110/csi-hostpathplugin:dummy" at 100.70.155.243:12345/TCP
I1007 16:44:18.507023       1 proxier.go:857] "Syncing iptables rules"
I1007 16:44:18.577608       1 proxier.go:824] "syncProxyRules complete" elapsed="70.715275ms"
I1007 16:44:18.577758       1 proxier.go:857] "Syncing iptables rules"
I1007 16:44:18.650885       1 proxier.go:824] "syncProxyRules complete" elapsed="73.234725ms"
I1007 16:44:22.347467       1 service.go:306] Service apply-8853/test-svc updated: 1 ports
I1007 16:44:22.347512       1 service.go:421] Adding new service port "apply-8853/test-svc" at 100.71.9.112:8080/UDP
I1007 16:44:22.347638       1 proxier.go:857] "Syncing iptables rules"
I1007 16:44:22.391063       1 proxier.go:824] "syncProxyRules complete" elapsed="43.543936ms"
I1007 16:44:24.389455       1 service.go:306] Service services-1911/clusterip-service updated: 1 ports
I1007 16:44:24.389505       1 service.go:421] Adding new service port "services-1911/clusterip-service" at 100.69.230.139:80/TCP
I1007 16:44:24.389611       1 proxier.go:857] "Syncing iptables rules"
I1007 16:44:24.428580       1 proxier.go:824] "syncProxyRules complete" elapsed="39.068368ms"
I1007 16:44:24.428723       1 proxier.go:857] "Syncing iptables rules"
I1007 16:44:24.490998       1 proxier.go:824] "syncProxyRules complete" elapsed="62.377111ms"
I1007 16:44:24.536043       1 service.go:306] Service services-1911/externalsvc updated: 1 ports
I1007 16:44:25.491798       1 service.go:421] Adding new service port "services-1911/externalsvc" at 100.67.145.44:80/TCP
I1007 16:44:25.492039       1 proxier.go:857] "Syncing iptables rules"
I1007 16:44:25.527513       1 proxier.go:824] "syncProxyRules complete" elapsed="35.74774ms"
I1007 16:44:26.528481       1 proxier.go:857] "Syncing iptables rules"
I1007 16:44:26.570950       1 proxier.go:824] "syncProxyRules complete" elapsed="42.567038ms"
I1007 16:44:27.571915       1 proxier.go:857] "Syncing iptables rules"
I1007 16:44:27.615648       1 proxier.go:824] "syncProxyRules complete" elapsed="43.961344ms"
I1007 16:44:27.993460       1 service.go:306] Service apply-8853/test-svc updated: 0 ports
I1007 16:44:28.267803       1 service.go:306] Service services-1911/clusterip-service updated: 0 ports
I1007 16:44:28.616560       1 service.go:446] Removing service port "apply-8853/test-svc"
I1007 16:44:28.616697       1 service.go:446] Removing service port "services-1911/clusterip-service"
I1007 16:44:28.616918       1 proxier.go:857] "Syncing iptables rules"
I1007 16:44:28.685016       1 proxier.go:824] "syncProxyRules complete" elapsed="68.458375ms"
I1007 16:44:31.749163       1 service.go:306] Service provisioning-6279-4346/csi-hostpathplugin updated: 1 ports
I1007 16:44:31.749210       1 service.go:421] Adding new service port "provisioning-6279-4346/csi-hostpathplugin:dummy" at 100.69.116.105:12345/TCP
I1007 16:44:31.749315       1 proxier.go:857] "Syncing iptables rules"
I1007 16:44:31.788356       1 proxier.go:824] "syncProxyRules complete" elapsed="39.129422ms"
I1007 16:44:31.788728       1 proxier.go:857] "Syncing iptables rules"
I1007 16:44:31.796220       1 service.go:306] Service webhook-9382/e2e-test-webhook updated: 0 ports
I1007 16:44:31.829541       1 proxier.go:824] "syncProxyRules complete" elapsed="41.073873ms"
I1007 16:44:32.522938       1 service.go:306] Service ephemeral-1838-4411/csi-hostpathplugin updated: 1 ports
I1007 16:44:32.830568       1 service.go:446] Removing service port "webhook-9382/e2e-test-webhook"
I1007 16:44:32.830617       1 service.go:421] Adding new service port "ephemeral-1838-4411/csi-hostpathplugin:dummy" at 100.68.71.214:12345/TCP
I1007 16:44:32.830768       1 proxier.go:857] "Syncing iptables rules"
I1007 16:44:32.907025       1 proxier.go:824] "syncProxyRules complete" elapsed="76.454828ms"
I1007 16:44:37.749707       1 proxier.go:857] "Syncing iptables rules"
I1007 16:44:37.827726       1 proxier.go:824] "syncProxyRules complete" elapsed="78.108715ms"
I1007 16:44:38.368655       1 proxier.go:857] "Syncing iptables rules"
I1007 16:44:38.418349       1 proxier.go:824] "syncProxyRules complete" elapsed="49.869708ms"
I1007 16:44:39.968238       1 proxier.go:857] "Syncing iptables rules"
I1007 16:44:40.183769       1 proxier.go:824] "syncProxyRules complete" elapsed="215.619222ms"
I1007 16:44:46.396740       1 service.go:306] Service services-2777/affinity-clusterip-transition updated: 1 ports
I1007 16:44:46.396785       1 service.go:423] Updating existing service port "services-2777/affinity-clusterip-transition" at 100.70.82.255:80/TCP
I1007 16:44:46.396932       1 proxier.go:857] "Syncing iptables rules"
I1007 16:44:46.433586       1 proxier.go:824] "syncProxyRules complete" elapsed="36.795581ms"
I1007 16:44:48.608461       1 proxier.go:857] "Syncing iptables rules"
I1007 16:44:48.645392       1 proxier.go:824] "syncProxyRules complete" elapsed="37.07063ms"
I1007 16:44:48.645647       1 proxier.go:857] "Syncing iptables rules"
I1007 16:44:48.684791       1 proxier.go:824] "syncProxyRules complete" elapsed="39.358219ms"
I1007 16:45:04.528514       1 service.go:306] Service services-2777/affinity-clusterip-transition updated: 0 ports
I1007 16:45:04.528554       1 service.go:446] Removing service port "services-2777/affinity-clusterip-transition"
I1007 16:45:04.528657       1 proxier.go:857] "Syncing iptables rules"
I1007 16:45:04.594580       1 proxier.go:824] "syncProxyRules complete" elapsed="66.013477ms"
I1007 16:45:04.594772       1 proxier.go:857] "Syncing iptables rules"
I1007 16:45:04.672942       1 proxier.go:824] "syncProxyRules complete" elapsed="78.31499ms"
I1007 16:45:06.062456       1 service.go:306] Service volumemode-7203-7465/csi-hostpathplugin updated: 0 ports
I1007 16:45:06.062527       1 service.go:446] Removing service port "volumemode-7203-7465/csi-hostpathplugin:dummy"
I1007 16:45:06.062654       1 proxier.go:857] "Syncing iptables rules"
I1007 16:45:06.145669       1 proxier.go:824] "syncProxyRules complete" elapsed="83.11665ms"
I1007 16:45:07.146703       1 proxier.go:857] "Syncing iptables rules"
I1007 16:45:07.211791       1 proxier.go:824] "syncProxyRules complete" elapsed="65.236874ms"
I1007 16:45:13.208336       1 proxier.go:857] "Syncing iptables rules"
I1007 16:45:13.258637       1 proxier.go:824] "syncProxyRules complete" elapsed="50.518856ms"
I1007 16:45:14.207954       1 proxier.go:857] "Syncing iptables rules"
I1007 16:45:14.281304       1 proxier.go:824] "syncProxyRules complete" elapsed="73.432351ms"
I1007 16:45:25.203092       1 service.go:306] Service services-1911/externalsvc updated: 0 ports
I1007 16:45:25.203130       1 service.go:446] Removing service port "services-1911/externalsvc"
I1007 16:45:25.203258       1 proxier.go:857] "Syncing iptables rules"
I1007 16:45:25.250458       1 proxier.go:824] "syncProxyRules complete" elapsed="47.317113ms"
I1007 16:45:25.250653       1 proxier.go:857] "Syncing iptables rules"
I1007 16:45:25.284740       1 proxier.go:824] "syncProxyRules complete" elapsed="34.23949ms"
I1007 16:45:30.235892       1 service.go:306] Service webhook-6991/e2e-test-webhook updated: 1 ports
I1007 16:45:30.235942       1 service.go:421] Adding new service port "webhook-6991/e2e-test-webhook" at 100.68.213.23:8443/TCP
I1007 16:45:30.236074       1 proxier.go:857] "Syncing iptables rules"
I1007 16:45:30.248734       1 service.go:306] Service provisioning-6279-4346/csi-hostpathplugin updated: 0 ports
I1007 16:45:30.371033       1 proxier.go:824] "syncProxyRules complete" elapsed="135.084301ms"
I1007 16:45:30.371067       1 service.go:446] Removing service port "provisioning-6279-4346/csi-hostpathplugin:dummy"
I1007 16:45:30.371238       1 proxier.go:857] "Syncing iptables rules"
I1007 16:45:30.478422       1 proxier.go:824] "syncProxyRules complete" elapsed="106.511894ms"
I1007 16:45:33.782562       1 service.go:306] Service webhook-6991/e2e-test-webhook updated: 0 ports
I1007 16:45:33.782606       1 service.go:446] Removing service port "webhook-6991/e2e-test-webhook"
I1007 16:45:33.782755       1 proxier.go:857] "Syncing iptables rules"
I1007 16:45:33.821148       1 proxier.go:824] "syncProxyRules complete" elapsed="38.532868ms"
I1007 16:45:33.821407       1 proxier.go:857] "Syncing iptables rules"
I1007 16:45:33.854134       1 proxier.go:824] "syncProxyRules complete" elapsed="32.947402ms"
I1007 16:45:53.508119       1 service.go:306] Service provisioning-9827-3200/csi-hostpathplugin updated: 1 ports
I1007 16:45:53.508168       1 service.go:421] Adding new service port "provisioning-9827-3200/csi-hostpathplugin:dummy" at 100.68.46.2:12345/TCP
I1007 16:45:53.508294       1 proxier.go:857] "Syncing iptables rules"
I1007 16:45:53.543423       1 proxier.go:824] "syncProxyRules complete" elapsed="35.248311ms"
I1007 16:45:53.543662       1 proxier.go:857] "Syncing iptables rules"
I1007 16:45:53.577936       1 proxier.go:824] "syncProxyRules complete" elapsed="34.47257ms"
I1007 16:45:58.497184       1 proxier.go:857] "Syncing iptables rules"
I1007 16:45:58.602107       1 proxier.go:824] "syncProxyRules complete" elapsed="104.996884ms"
I1007 16:46:10.072554       1 service.go:306] Service ephemeral-1736-2110/csi-hostpathplugin updated: 0 ports
I1007 16:46:10.072597       1 service.go:446] Removing service port "ephemeral-1736-2110/csi-hostpathplugin:dummy"
I1007 16:46:10.072734       1 proxier.go:857] "Syncing iptables rules"
I1007 16:46:10.112458       1 proxier.go:824] "syncProxyRules complete" elapsed="39.850102ms"
I1007 16:46:10.112740       1 proxier.go:857] "Syncing iptables rules"
I1007 16:46:10.159718       1 proxier.go:824] "syncProxyRules complete" elapsed="47.217016ms"
I1007 16:46:19.328893       1 service.go:306] Service volume-3394-573/csi-hostpathplugin updated: 1 ports
I1007 16:46:19.328933       1 service.go:421] Adding new service port "volume-3394-573/csi-hostpathplugin:dummy" at 100.65.40.9:12345/TCP
I1007 16:46:19.329103       1 proxier.go:857] "Syncing iptables rules"
I1007 16:46:19.392221       1 proxier.go:824] "syncProxyRules complete" elapsed="63.271441ms"
I1007 16:46:19.392644       1 proxier.go:857] "Syncing iptables rules"
I1007 16:46:19.451279       1 proxier.go:824] "syncProxyRules complete" elapsed="58.916811ms"
I1007 16:46:21.869426       1 proxier.go:857] "Syncing iptables rules"
I1007 16:46:21.904953       1 proxier.go:824] "syncProxyRules complete" elapsed="35.614657ms"
I1007 16:46:22.164382       1 service.go:306] Service services-1774/externalname-service updated: 1 ports
I1007 16:46:22.164457       1 service.go:421] Adding new service port "services-1774/externalname-service:http" at 100.65.133.96:80/TCP
I1007 16:46:22.164620       1 proxier.go:857] "Syncing iptables rules"
I1007 16:46:22.199067       1 proxier.go:1292] "Opened local port" port="\"nodePort for services-1774/externalname-service:http\" (:31623/tcp4)"
I1007 16:46:22.204384       1 proxier.go:824] "syncProxyRules complete" elapsed="39.922828ms"
I1007 16:46:23.205419       1 proxier.go:857] "Syncing iptables rules"
I1007 16:46:23.245231       1 proxier.go:824] "syncProxyRules complete" elapsed="39.916403ms"
I1007 16:46:24.083366       1 proxier.go:857] "Syncing iptables rules"
I1007 16:46:24.119829       1 proxier.go:824] "syncProxyRules complete" elapsed="36.558809ms"
I1007 16:46:27.210754       1 service.go:306] Service ephemeral-1838-4411/csi-hostpathplugin updated: 0 ports
I1007 16:46:27.210793       1 service.go:446] Removing service port "ephemeral-1838-4411/csi-hostpathplugin:dummy"
I1007 16:46:27.210931       1 proxier.go:857] "Syncing iptables rules"
I1007 16:46:27.249347       1 proxier.go:824] "syncProxyRules complete" elapsed="38.528806ms"
I1007 16:46:27.249601       1 proxier.go:857] "Syncing iptables rules"
I1007 16:46:27.301452       1 proxier.go:824] "syncProxyRules complete" elapsed="52.060791ms"
I1007 16:46:28.550759       1 service.go:306] Service services-2910/externalname-service updated: 1 ports
I1007 16:46:28.550808       1 service.go:421] Adding new service port "services-2910/externalname-service:http" at 100.66.91.113:80/TCP
I1007 16:46:28.550910       1 proxier.go:857] "Syncing iptables rules"
I1007 16:46:28.631925       1 proxier.go:824] "syncProxyRules complete" elapsed="81.110481ms"
I1007 16:46:29.632152       1 proxier.go:857] "Syncing iptables rules"
I1007 16:46:29.679259       1 proxier.go:824] "syncProxyRules complete" elapsed="47.257117ms"
I1007 16:46:30.679545       1 proxier.go:857] "Syncing iptables rules"
I1007 16:46:30.715231       1 proxier.go:824] "syncProxyRules complete" elapsed="35.823308ms"
I1007 16:46:34.738390       1 service.go:306] Service provisioning-9827-3200/csi-hostpathplugin updated: 0 ports
I1007 16:46:34.738445       1 service.go:446] Removing service port "provisioning-9827-3200/csi-hostpathplugin:dummy"
I1007 16:46:34.738599       1 proxier.go:857] "Syncing iptables rules"
I1007 16:46:34.787791       1 proxier.go:824] "syncProxyRules complete" elapsed="49.347312ms"
I1007 16:46:34.787960       1 proxier.go:857] "Syncing iptables rules"
I1007 16:46:34.829270       1 proxier.go:824] "syncProxyRules complete" elapsed="41.434496ms"
I1007 16:46:41.859340       1 service.go:306] Service services-9857/affinity-nodeport-timeout updated: 1 ports
I1007 16:46:41.859391       1 service.go:421] Adding new service port "services-9857/affinity-nodeport-timeout" at 100.64.79.105:80/TCP
I1007 16:46:41.859517       1 proxier.go:857] "Syncing iptables rules"
I1007 16:46:41.921576       1 proxier.go:1292] "Opened local port" port="\"nodePort for services-9857/affinity-nodeport-timeout\" (:31045/tcp4)"
I1007 16:46:41.948846       1 proxier.go:824] "syncProxyRules complete" elapsed="89.422298ms"
I1007 16:46:41.949216       1 proxier.go:857] "Syncing iptables rules"
I1007 16:46:42.126476       1 proxier.go:824] "syncProxyRules complete" elapsed="177.583786ms"
I1007 16:46:43.187638       1 proxier.go:857] "Syncing iptables rules"
I1007 16:46:43.222275       1 proxier.go:824] "syncProxyRules complete" elapsed="34.772108ms"
I1007 16:46:44.223450       1 proxier.go:857] "Syncing iptables rules"
I1007 16:46:44.261785       1 proxier.go:824] "syncProxyRules complete" elapsed="38.481049ms"
I1007 16:46:44.854156       1 service.go:306] Service services-2910/externalname-service updated: 0 ports
I1007 16:46:44.881682       1 service.go:446] Removing service port "services-2910/externalname-service:http"
I1007 16:46:44.881840       1 proxier.go:857] "Syncing iptables rules"
I1007 16:46:44.925257       1 proxier.go:824] "syncProxyRules complete" elapsed="43.568489ms"
I1007 16:46:57.440073       1 proxier.go:857] "Syncing iptables rules"
I1007 16:46:57.481502       1 proxier.go:824] "syncProxyRules complete" elapsed="41.591461ms"
I1007 16:46:57.481740       1 proxier.go:857] "Syncing iptables rules"
I1007 16:46:57.521745       1 service.go:306] Service services-8024/up-down-1 updated: 0 ports
I1007 16:46:57.533829       1 service.go:306] Service services-8024/up-down-2 updated: 0 ports
I1007 16:46:57.540020       1 proxier.go:824] "syncProxyRules complete" elapsed="58.477902ms"
I1007 16:46:58.540196       1 service.go:446] Removing service port "services-8024/up-down-1"
I1007 16:46:58.540238       1 service.go:446] Removing service port "services-8024/up-down-2"
I1007 16:46:58.540363       1 proxier.go:857] "Syncing iptables rules"
I1007 16:46:58.627361       1 proxier.go:824] "syncProxyRules complete" elapsed="87.167395ms"
I1007 16:47:00.502213       1 service.go:306] Service services-1774/externalname-service updated: 0 ports
I1007 16:47:00.502256       1 service.go:446] Removing service port "services-1774/externalname-service:http"
I1007 16:47:00.502383       1 proxier.go:857] "Syncing iptables rules"
I1007 16:47:00.592481       1 proxier.go:824] "syncProxyRules complete" elapsed="90.206448ms"
I1007 16:47:00.592930       1 proxier.go:857] "Syncing iptables rules"
I1007 16:47:00.635794       1 proxier.go:824] "syncProxyRules complete" elapsed="43.259684ms"
I1007 16:47:14.464148       1 service.go:306] Service conntrack-3329/boom-server updated: 1 ports
I1007 16:47:14.464197       1 service.go:421] Adding new service port "conntrack-3329/boom-server" at 100.70.58.210:9000/TCP
I1007 16:47:14.464300       1 proxier.go:857] "Syncing iptables rules"
I1007 16:47:14.502996       1 proxier.go:824] "syncProxyRules complete" elapsed="38.792825ms"
I1007 16:47:14.503463       1 proxier.go:857] "Syncing iptables rules"
I1007 16:47:14.546162       1 proxier.go:824] "syncProxyRules complete" elapsed="43.124356ms"
I1007 16:47:27.719158       1 service.go:306] Service volume-3394-573/csi-hostpathplugin updated: 0 ports
I1007 16:47:27.719212       1 service.go:446] Removing service port "volume-3394-573/csi-hostpathplugin:dummy"
I1007 16:47:27.719342       1 proxier.go:857] "Syncing iptables rules"
I1007 16:47:27.761935       1 proxier.go:824] "syncProxyRules complete" elapsed="42.709056ms"
I1007 16:47:27.762187       1 proxier.go:857] "Syncing iptables rules"
I1007 16:47:27.821083       1 proxier.go:824] "syncProxyRules complete" elapsed="59.10319ms"
I1007 16:47:48.850436       1 service.go:306] Service kubectl-3365/agnhost-primary updated: 1 ports
I1007 16:47:48.850485       1 service.go:421] Adding new service port "kubectl-3365/agnhost-primary" at 100.64.231.181:6379/TCP
I1007 16:47:48.850616       1 proxier.go:857] "Syncing iptables rules"
I1007 16:47:48.904363       1 proxier.go:824] "syncProxyRules complete" elapsed="53.868948ms"
I1007 16:47:48.904589       1 proxier.go:857] "Syncing iptables rules"
I1007 16:47:48.971397       1 proxier.go:824] "syncProxyRules complete" elapsed="66.994434ms"
W1007 16:47:50.017652       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
I1007 16:47:56.175189       1 service.go:306] Service kubectl-3365/agnhost-primary updated: 0 ports
I1007 16:47:56.175234       1 service.go:446] Removing service port "kubectl-3365/agnhost-primary"
I1007 16:47:56.175361       1 proxier.go:857] "Syncing iptables rules"
I1007 16:47:56.214282       1 proxier.go:824] "syncProxyRules complete" elapsed="39.034913ms"
I1007 16:47:56.214495       1 proxier.go:857] "Syncing iptables rules"
I1007 16:47:56.250999       1 proxier.go:824] "syncProxyRules complete" elapsed="36.677928ms"
I1007 16:48:10.853886       1 proxier.go:857] "Syncing iptables rules"
I1007 16:48:10.896975       1 proxier.go:824] "syncProxyRules complete" elapsed="43.175063ms"
I1007 16:48:10.897188       1 proxier.go:857] "Syncing iptables rules"
I1007 16:48:10.935840       1 proxier.go:824] "syncProxyRules complete" elapsed="38.820916ms"
I1007 16:48:16.088860       1 proxier.go:857] "Syncing iptables rules"
I1007 16:48:16.226965       1 service.go:306] Service dns-2292/test-service-2 updated: 0 ports
I1007 16:48:16.274075       1 proxier.go:824] "syncProxyRules complete" elapsed="185.336363ms"
I1007 16:48:16.274112       1 service.go:446] Removing service port "dns-2292/test-service-2:http"
I1007 16:48:16.274393       1 proxier.go:857] "Syncing iptables rules"
I1007 16:48:16.336049       1 proxier.go:824] "syncProxyRules complete" elapsed="61.91629ms"
I1007 16:48:20.214134       1 service.go:306] Service services-1840/hairpin-test updated: 1 ports
I1007 16:48:20.214185       1 service.go:421] Adding new service port "services-1840/hairpin-test" at 100.70.254.218:8080/TCP
I1007 
16:48:20.214309       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:48:20.297807       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"83.609982ms\"\nI1007 16:48:20.297961       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:48:20.385898       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"88.042896ms\"\nI1007 16:48:21.546773       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:48:21.597874       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"51.180504ms\"\nI1007 16:48:22.968039       1 service.go:306] Service conntrack-3329/boom-server updated: 0 ports\nI1007 16:48:22.968078       1 service.go:446] Removing service port \"conntrack-3329/boom-server\"\nI1007 16:48:22.968200       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:48:23.038594       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"70.505377ms\"\nI1007 16:48:24.038816       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:48:24.084633       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"45.901606ms\"\nI1007 16:48:26.280754       1 service.go:306] Service services-9857/affinity-nodeport-timeout updated: 0 ports\nI1007 16:48:26.280790       1 service.go:446] Removing service port \"services-9857/affinity-nodeport-timeout\"\nI1007 16:48:26.280900       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:48:26.317936       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"37.135594ms\"\nI1007 16:48:26.318108       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:48:26.349887       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"31.909185ms\"\nI1007 16:48:32.635974       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:48:32.646023       1 service.go:306] Service services-1840/hairpin-test updated: 0 ports\nI1007 16:48:32.690931       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"55.038053ms\"\nI1007 16:48:32.690966       1 service.go:446] Removing service port 
\"services-1840/hairpin-test\"\nI1007 16:48:32.691235       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:48:32.725586       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"34.611875ms\"\nI1007 16:48:36.322881       1 service.go:306] Service services-217/externalip-test updated: 1 ports\nI1007 16:48:36.322923       1 service.go:421] Adding new service port \"services-217/externalip-test:http\" at 100.65.22.59:80/TCP\nI1007 16:48:36.323048       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:48:36.396808       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"73.873452ms\"\nI1007 16:48:36.397101       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:48:36.447414       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"50.543118ms\"\nI1007 16:48:38.438360       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:48:38.472885       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"34.613216ms\"\nI1007 16:48:39.149215       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:48:39.188500       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"39.369228ms\"\nI1007 16:48:42.397452       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:48:42.444576       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"47.245638ms\"\nI1007 16:48:42.538215       1 service.go:306] Service dns-262/test-service-2 updated: 0 ports\nI1007 16:48:42.538251       1 service.go:446] Removing service port \"dns-262/test-service-2:http\"\nI1007 16:48:42.538373       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:48:42.600331       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"62.063499ms\"\nI1007 16:48:43.600977       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:48:43.632889       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"32.004275ms\"\nI1007 16:48:45.684430       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:48:45.733072       1 proxier.go:824] \"syncProxyRules complete\" 
elapsed=\"48.818654ms\"\nI1007 16:48:52.465901       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:48:52.510001       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"44.188929ms\"\nI1007 16:49:23.010464       1 service.go:306] Service volumemode-4802-7028/csi-hostpathplugin updated: 1 ports\nI1007 16:49:23.010510       1 service.go:421] Adding new service port \"volumemode-4802-7028/csi-hostpathplugin:dummy\" at 100.68.177.90:12345/TCP\nI1007 16:49:23.010632       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:49:23.049497       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"38.980564ms\"\nI1007 16:49:23.049708       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:49:23.084997       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"35.44656ms\"\nI1007 16:49:28.250847       1 service.go:306] Service webhook-6514/e2e-test-webhook updated: 1 ports\nI1007 16:49:28.250896       1 service.go:421] Adding new service port \"webhook-6514/e2e-test-webhook\" at 100.65.225.68:8443/TCP\nI1007 16:49:28.251214       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:49:28.318418       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"67.518627ms\"\nI1007 16:49:28.318717       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:49:28.360435       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"41.96996ms\"\nI1007 16:49:29.801411       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:49:29.839784       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"38.446083ms\"\nI1007 16:49:31.978677       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:49:32.101733       1 service.go:306] Service services-217/externalip-test updated: 0 ports\nI1007 16:49:32.218178       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"239.609715ms\"\nI1007 16:49:32.218211       1 service.go:446] Removing service port \"services-217/externalip-test:http\"\nI1007 16:49:32.218331       1 proxier.go:857] \"Syncing 
iptables rules\"\nI1007 16:49:32.267065       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"48.836957ms\"\nI1007 16:49:33.287201       1 service.go:306] Service ephemeral-2117-5082/csi-hostpathplugin updated: 1 ports\nI1007 16:49:33.287255       1 service.go:421] Adding new service port \"ephemeral-2117-5082/csi-hostpathplugin:dummy\" at 100.65.189.165:12345/TCP\nI1007 16:49:33.287380       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:49:33.320375       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"33.118446ms\"\nI1007 16:49:33.907389       1 service.go:306] Service webhook-6514/e2e-test-webhook updated: 0 ports\nI1007 16:49:34.320527       1 service.go:446] Removing service port \"webhook-6514/e2e-test-webhook\"\nI1007 16:49:34.320708       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:49:34.370074       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"49.541576ms\"\nI1007 16:49:41.663173       1 service.go:306] Service endpointslice-3787/example-empty-selector updated: 1 ports\nI1007 16:49:41.663221       1 service.go:421] Adding new service port \"endpointslice-3787/example-empty-selector:example\" at 100.71.42.85:80/TCP\nI1007 16:49:41.663342       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:49:41.698859       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"35.630436ms\"\nI1007 16:49:41.699072       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:49:41.734244       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"35.342359ms\"\nI1007 16:49:42.116726       1 service.go:306] Service endpointslice-3787/example-empty-selector updated: 0 ports\nI1007 16:49:42.734394       1 service.go:446] Removing service port \"endpointslice-3787/example-empty-selector:example\"\nI1007 16:49:42.734573       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:49:42.772980       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"38.572364ms\"\nI1007 16:49:46.253147       1 service.go:306] Service 
provisioning-5424-2566/csi-hostpathplugin updated: 1 ports\nI1007 16:49:46.253192       1 service.go:421] Adding new service port \"provisioning-5424-2566/csi-hostpathplugin:dummy\" at 100.66.223.44:12345/TCP\nI1007 16:49:46.253314       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:49:46.303255       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"50.048359ms\"\nI1007 16:49:46.303413       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:49:46.355181       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"51.872377ms\"\nI1007 16:49:58.046009       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:49:58.128790       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"82.945878ms\"\nI1007 16:50:00.447573       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:50:00.485291       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"37.783719ms\"\nI1007 16:50:06.428911       1 service.go:306] Service volumemode-4802-7028/csi-hostpathplugin updated: 0 ports\nI1007 16:50:06.428952       1 service.go:446] Removing service port \"volumemode-4802-7028/csi-hostpathplugin:dummy\"\nI1007 16:50:06.429080       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:50:06.466067       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"37.104678ms\"\nI1007 16:50:06.466270       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:50:06.540017       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"73.844923ms\"\nI1007 16:50:07.127714       1 service.go:306] Service conntrack-4290/svc-udp updated: 1 ports\nI1007 16:50:07.540654       1 service.go:421] Adding new service port \"conntrack-4290/svc-udp:udp\" at 100.66.98.61:80/UDP\nI1007 16:50:07.540786       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:50:07.573693       1 proxier.go:1292] \"Opened local port\" port=\"\\\"nodePort for conntrack-4290/svc-udp:udp\\\" (:31732/udp4)\"\nI1007 16:50:07.579011       1 proxier.go:824] \"syncProxyRules complete\" 
elapsed=\"38.404303ms\"\nI1007 16:50:18.691217       1 proxier.go:841] \"Stale service\" protocol=\"udp\" svcPortName=\"conntrack-4290/svc-udp:udp\" clusterIP=\"100.66.98.61\"\nI1007 16:50:18.691295       1 proxier.go:851] Stale udp service NodePort conntrack-4290/svc-udp:udp -> 31732\nI1007 16:50:18.691355       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:50:18.744771       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"53.675807ms\"\nI1007 16:50:29.631872       1 service.go:306] Service provisioning-5424-2566/csi-hostpathplugin updated: 0 ports\nI1007 16:50:29.631920       1 service.go:446] Removing service port \"provisioning-5424-2566/csi-hostpathplugin:dummy\"\nI1007 16:50:29.632047       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:50:29.688315       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"56.38282ms\"\nI1007 16:50:29.688450       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:50:29.771931       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"83.564688ms\"\nI1007 16:50:32.398584       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:50:32.440287       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"41.787742ms\"\nI1007 16:50:34.059135       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:50:34.122377       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"63.371706ms\"\nI1007 16:50:36.561241       1 service.go:306] Service services-3374/affinity-nodeport updated: 1 ports\nI1007 16:50:36.561290       1 service.go:421] Adding new service port \"services-3374/affinity-nodeport\" at 100.71.3.67:80/TCP\nI1007 16:50:36.561410       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:50:36.607077       1 proxier.go:1292] \"Opened local port\" port=\"\\\"nodePort for services-3374/affinity-nodeport\\\" (:30817/tcp4)\"\nI1007 16:50:36.617615       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"56.318024ms\"\nI1007 16:50:36.617849       1 proxier.go:857] \"Syncing 
iptables rules\"\nI1007 16:50:36.678587       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"60.928535ms\"\nI1007 16:50:38.200716       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:50:38.260345       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"59.707339ms\"\nI1007 16:50:38.720373       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:50:38.760470       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"40.182556ms\"\nI1007 16:50:39.760776       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:50:39.937222       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"176.574264ms\"\nI1007 16:50:46.922855       1 service.go:306] Service ephemeral-2117-5082/csi-hostpathplugin updated: 0 ports\nI1007 16:50:46.922893       1 service.go:446] Removing service port \"ephemeral-2117-5082/csi-hostpathplugin:dummy\"\nI1007 16:50:46.923009       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:50:47.024055       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"101.150272ms\"\nI1007 16:50:47.033991       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:50:47.066036       1 service.go:306] Service provisioning-3701-4465/csi-hostpathplugin updated: 1 ports\nI1007 16:50:47.102339       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"68.433084ms\"\nI1007 16:50:48.102519       1 service.go:421] Adding new service port \"provisioning-3701-4465/csi-hostpathplugin:dummy\" at 100.68.68.220:12345/TCP\nI1007 16:50:48.102665       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:50:48.154569       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"52.084535ms\"\nI1007 16:50:50.405622       1 service.go:306] Service conntrack-4290/svc-udp updated: 0 ports\nI1007 16:50:50.405669       1 service.go:446] Removing service port \"conntrack-4290/svc-udp:udp\"\nI1007 16:50:50.405790       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:50:50.444002       1 proxier.go:824] \"syncProxyRules complete\" 
elapsed=\"38.326059ms\"\nI1007 16:50:50.444166       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:50:50.480206       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"36.145442ms\"\nI1007 16:50:51.984242       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:50:52.030479       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"46.323802ms\"\nI1007 16:50:53.030751       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:50:53.069836       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"39.183269ms\"\nI1007 16:50:54.663821       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:50:54.699616       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"35.869051ms\"\nI1007 16:50:57.777409       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:50:57.819289       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"41.938942ms\"\nI1007 16:50:57.825150       1 service.go:306] Service services-3374/affinity-nodeport updated: 0 ports\nI1007 16:50:57.825187       1 service.go:446] Removing service port \"services-3374/affinity-nodeport\"\nI1007 16:50:57.825390       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:50:57.864648       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"39.450344ms\"\nI1007 16:50:57.979406       1 service.go:306] Service webhook-1096/e2e-test-webhook updated: 1 ports\nI1007 16:50:58.865506       1 service.go:421] Adding new service port \"webhook-1096/e2e-test-webhook\" at 100.69.175.66:8443/TCP\nI1007 16:50:58.865726       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:50:58.917681       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"52.194958ms\"\nI1007 16:51:00.325770       1 service.go:306] Service webhook-1096/e2e-test-webhook updated: 0 ports\nI1007 16:51:00.325812       1 service.go:446] Removing service port \"webhook-1096/e2e-test-webhook\"\nI1007 16:51:00.325934       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:51:00.392753       1 
proxier.go:824] \"syncProxyRules complete\" elapsed=\"66.923831ms\"\nI1007 16:51:01.393046       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:51:01.543351       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"150.409503ms\"\nI1007 16:51:02.886810       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:51:02.947199       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"60.517286ms\"\nI1007 16:51:04.252948       1 service.go:306] Service services-8936/service-headless-toggled updated: 1 ports\nI1007 16:51:04.253114       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:51:04.321214       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"68.303237ms\"\nI1007 16:51:04.321260       1 service.go:421] Adding new service port \"services-8936/service-headless-toggled\" at 100.65.15.87:80/TCP\nI1007 16:51:04.321469       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:51:04.377789       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"56.532217ms\"\nI1007 16:51:05.748210       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:51:05.786246       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"38.103617ms\"\nI1007 16:51:07.201227       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:51:07.249980       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"48.87175ms\"\nI1007 16:51:07.442291       1 service.go:306] Service services-5652/test-service-2m4cv updated: 1 ports\nI1007 16:51:07.442340       1 service.go:421] Adding new service port \"services-5652/test-service-2m4cv:http\" at 100.66.133.23:80/TCP\nI1007 16:51:07.442430       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:51:07.486248       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"43.899687ms\"\nI1007 16:51:07.878710       1 service.go:306] Service services-5652/test-service-2m4cv updated: 1 ports\nI1007 16:51:08.317864       1 service.go:306] Service services-5652/test-service-2m4cv updated: 1 ports\nI1007 
16:51:08.317906       1 service.go:423] Updating existing service port \"services-5652/test-service-2m4cv:http\" at 100.66.133.23:80/TCP\nI1007 16:51:08.318057       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:51:08.356869       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"38.953253ms\"\nI1007 16:51:08.896918       1 service.go:306] Service services-5652/test-service-2m4cv updated: 0 ports\nI1007 16:51:09.357550       1 service.go:446] Removing service port \"services-5652/test-service-2m4cv:http\"\nI1007 16:51:09.357687       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:51:09.405074       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"47.514163ms\"\nI1007 16:51:13.087934       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:51:13.175572       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"87.729029ms\"\nI1007 16:51:19.094225       1 service.go:306] Service services-5777/nodeport-reuse updated: 1 ports\nI1007 16:51:19.094271       1 service.go:421] Adding new service port \"services-5777/nodeport-reuse\" at 100.67.22.203:80/TCP\nI1007 16:51:19.094572       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:51:19.196403       1 proxier.go:1292] \"Opened local port\" port=\"\\\"nodePort for services-5777/nodeport-reuse\\\" (:30730/tcp4)\"\nI1007 16:51:19.226334       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"130.529732ms\"\nI1007 16:51:19.226878       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:51:19.239709       1 service.go:306] Service services-5777/nodeport-reuse updated: 0 ports\nI1007 16:51:19.308419       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"82.035105ms\"\nI1007 16:51:19.772940       1 service.go:306] Service provisioning-3701-4465/csi-hostpathplugin updated: 0 ports\nI1007 16:51:20.309254       1 service.go:446] Removing service port \"services-5777/nodeport-reuse\"\nI1007 16:51:20.309289       1 service.go:446] Removing service port 
\"provisioning-3701-4465/csi-hostpathplugin:dummy\"\nI1007 16:51:20.309448       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:51:20.357376       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"48.116016ms\"\nI1007 16:51:21.502461       1 service.go:306] Service services-1689/affinity-clusterip-timeout updated: 1 ports\nI1007 16:51:21.502542       1 service.go:421] Adding new service port \"services-1689/affinity-clusterip-timeout\" at 100.64.164.159:80/TCP\nI1007 16:51:21.502682       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:51:21.537005       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"34.493451ms\"\nI1007 16:51:22.537551       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:51:22.628351       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"90.89788ms\"\nI1007 16:51:23.311939       1 service.go:306] Service services-5777/nodeport-reuse updated: 1 ports\nI1007 16:51:23.311996       1 service.go:421] Adding new service port \"services-5777/nodeport-reuse\" at 100.65.164.220:80/TCP\nI1007 16:51:23.312120       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:51:23.431202       1 proxier.go:1292] \"Opened local port\" port=\"\\\"nodePort for services-5777/nodeport-reuse\\\" (:30730/tcp4)\"\nI1007 16:51:23.449095       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"137.097474ms\"\nI1007 16:51:23.460126       1 service.go:306] Service services-5777/nodeport-reuse updated: 0 ports\nI1007 16:51:24.449265       1 service.go:446] Removing service port \"services-5777/nodeport-reuse\"\nI1007 16:51:24.449447       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:51:24.553958       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"104.688499ms\"\nI1007 16:51:25.707964       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:51:25.740453       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"32.594581ms\"\nI1007 16:51:33.869775       1 service.go:306] Service 
ephemeral-3921-4265/csi-hostpathplugin updated: 1 ports\nI1007 16:51:33.869920       1 service.go:421] Adding new service port \"ephemeral-3921-4265/csi-hostpathplugin:dummy\" at 100.68.219.199:12345/TCP\nI1007 16:51:33.870048       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:51:33.912327       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"42.499044ms\"\nI1007 16:51:33.912475       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:51:33.956040       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"43.670032ms\"\nI1007 16:51:36.808481       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:51:36.842827       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"34.426684ms\"\nI1007 16:51:39.334964       1 service.go:306] Service volume-expand-1517-9173/csi-hostpathplugin updated: 1 ports\nI1007 16:51:39.335009       1 service.go:421] Adding new service port \"volume-expand-1517-9173/csi-hostpathplugin:dummy\" at 100.67.102.208:12345/TCP\nI1007 16:51:39.335183       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:51:39.417736       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"82.712386ms\"\nI1007 16:51:39.417912       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:51:39.469712       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"51.930794ms\"\nI1007 16:51:43.254130       1 service.go:306] Service services-3820/affinity-clusterip updated: 1 ports\nI1007 16:51:43.254177       1 service.go:421] Adding new service port \"services-3820/affinity-clusterip\" at 100.67.30.220:80/TCP\nI1007 16:51:43.254302       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:51:43.321234       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"67.052895ms\"\nI1007 16:51:43.321421       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:51:43.381875       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"60.589203ms\"\nI1007 16:51:45.781251       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 
16:51:45.815319       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"34.154401ms\"\nI1007 16:51:48.672609       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:51:48.719237       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"46.711696ms\"\nI1007 16:51:50.404165       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:51:50.439326       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"35.247104ms\"\nI1007 16:51:51.178396       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:51:51.215444       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"37.215094ms\"\nI1007 16:51:55.775463       1 service.go:306] Service webhook-7857/e2e-test-webhook updated: 1 ports\nI1007 16:51:55.775512       1 service.go:421] Adding new service port \"webhook-7857/e2e-test-webhook\" at 100.64.99.161:8443/TCP\nI1007 16:51:55.775612       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:51:55.818203       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"42.686794ms\"\nI1007 16:51:55.818395       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:51:55.854506       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"36.25941ms\"\nI1007 16:51:58.106517       1 service.go:306] Service webhook-7857/e2e-test-webhook updated: 0 ports\nI1007 16:51:58.106563       1 service.go:446] Removing service port \"webhook-7857/e2e-test-webhook\"\nI1007 16:51:58.106718       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:51:58.141696       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"35.121529ms\"\nI1007 16:51:58.141925       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:51:58.176293       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"34.55518ms\"\nI1007 16:52:01.821918       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:52:01.934966       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"113.174296ms\"\nI1007 16:52:02.831960       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 
16:52:02.977746       1 proxier.go:824] "syncProxyRules complete" elapsed="145.893291ms"
I1007 16:52:11.199220       1 proxier.go:857] "Syncing iptables rules"
I1007 16:52:11.234123       1 proxier.go:824] "syncProxyRules complete" elapsed="34.99957ms"
I1007 16:52:11.234375       1 proxier.go:857] "Syncing iptables rules"
I1007 16:52:11.269661       1 proxier.go:824] "syncProxyRules complete" elapsed="35.48421ms"
I1007 16:52:16.846719       1 service.go:306] Service services-3820/affinity-clusterip updated: 0 ports
I1007 16:52:16.846756       1 service.go:446] Removing service port "services-3820/affinity-clusterip"
I1007 16:52:16.846867       1 proxier.go:857] "Syncing iptables rules"
I1007 16:52:16.903922       1 proxier.go:824] "syncProxyRules complete" elapsed="57.150626ms"
I1007 16:52:16.904115       1 proxier.go:857] "Syncing iptables rules"
I1007 16:52:16.941070       1 proxier.go:824] "syncProxyRules complete" elapsed="37.104914ms"
I1007 16:52:24.630202       1 service.go:306] Service services-1689/affinity-clusterip-timeout updated: 0 ports
I1007 16:52:24.630242       1 service.go:446] Removing service port "services-1689/affinity-clusterip-timeout"
I1007 16:52:24.630351       1 proxier.go:857] "Syncing iptables rules"
I1007 16:52:24.676622       1 proxier.go:824] "syncProxyRules complete" elapsed="46.360566ms"
I1007 16:52:24.676770       1 proxier.go:857] "Syncing iptables rules"
I1007 16:52:24.736474       1 proxier.go:824] "syncProxyRules complete" elapsed="59.806284ms"
I1007 16:52:30.118584       1 service.go:306] Service provisioning-4145-5462/csi-hostpathplugin updated: 1 ports
I1007 16:52:30.118632       1 service.go:421] Adding new service port "provisioning-4145-5462/csi-hostpathplugin:dummy" at 100.70.34.79:12345/TCP
I1007 16:52:30.118759       1 proxier.go:857] "Syncing iptables rules"
I1007 16:52:30.210686       1 proxier.go:824] "syncProxyRules complete" elapsed="92.043213ms"
I1007 16:52:30.210839       1 proxier.go:857] "Syncing iptables rules"
I1007 16:52:30.256236       1 proxier.go:824] "syncProxyRules complete" elapsed="45.503602ms"
I1007 16:52:36.701943       1 service.go:306] Service services-7319/multi-endpoint-test updated: 2 ports
I1007 16:52:36.701998       1 service.go:421] Adding new service port "services-7319/multi-endpoint-test:portname1" at 100.70.254.159:80/TCP
I1007 16:52:36.702017       1 service.go:421] Adding new service port "services-7319/multi-endpoint-test:portname2" at 100.70.254.159:81/TCP
I1007 16:52:36.702149       1 proxier.go:857] "Syncing iptables rules"
I1007 16:52:36.749633       1 proxier.go:824] "syncProxyRules complete" elapsed="47.624111ms"
I1007 16:52:36.749789       1 proxier.go:857] "Syncing iptables rules"
I1007 16:52:36.798353       1 proxier.go:824] "syncProxyRules complete" elapsed="48.670579ms"
I1007 16:52:37.799367       1 proxier.go:857] "Syncing iptables rules"
I1007 16:52:37.832647       1 proxier.go:824] "syncProxyRules complete" elapsed="33.438655ms"
I1007 16:52:38.833126       1 proxier.go:857] "Syncing iptables rules"
I1007 16:52:38.887969       1 proxier.go:824] "syncProxyRules complete" elapsed="55.158112ms"
I1007 16:52:45.905044       1 service.go:306] Service ephemeral-3921-4265/csi-hostpathplugin updated: 0 ports
I1007 16:52:45.905086       1 service.go:446] Removing service port "ephemeral-3921-4265/csi-hostpathplugin:dummy"
I1007 16:52:45.905195       1 proxier.go:857] "Syncing iptables rules"
I1007 16:52:45.958510       1 proxier.go:824] "syncProxyRules complete" elapsed="53.408495ms"
I1007 16:52:45.958664       1 proxier.go:857] "Syncing iptables rules"
I1007 16:52:46.058825       1 proxier.go:824] "syncProxyRules complete" elapsed="100.267725ms"
I1007 16:52:47.059582       1 proxier.go:857] "Syncing iptables rules"
I1007 16:52:47.095521       1 proxier.go:824] "syncProxyRules complete" elapsed="36.053769ms"
I1007 16:52:49.747322       1 proxier.go:857] "Syncing iptables rules"
I1007 16:52:49.963104       1 proxier.go:824] "syncProxyRules complete" elapsed="215.905038ms"
I1007 16:52:50.474980       1 proxier.go:857] "Syncing iptables rules"
I1007 16:52:50.571079       1 proxier.go:824] "syncProxyRules complete" elapsed="96.21148ms"
I1007 16:52:51.053411       1 service.go:306] Service services-7319/multi-endpoint-test updated: 0 ports
I1007 16:52:51.053450       1 service.go:446] Removing service port "services-7319/multi-endpoint-test:portname1"
I1007 16:52:51.053464       1 service.go:446] Removing service port "services-7319/multi-endpoint-test:portname2"
I1007 16:52:51.053584       1 proxier.go:857] "Syncing iptables rules"
I1007 16:52:51.110916       1 proxier.go:824] "syncProxyRules complete" elapsed="57.436809ms"
I1007 16:52:52.111185       1 proxier.go:857] "Syncing iptables rules"
I1007 16:52:52.166516       1 proxier.go:824] "syncProxyRules complete" elapsed="55.452851ms"
I1007 16:52:52.669009       1 service.go:306] Service webhook-7449/e2e-test-webhook updated: 1 ports
I1007 16:52:53.166919       1 service.go:421] Adding new service port "webhook-7449/e2e-test-webhook" at 100.65.168.220:8443/TCP
I1007 16:52:53.167318       1 proxier.go:857] "Syncing iptables rules"
I1007 16:52:53.240619       1 proxier.go:824] "syncProxyRules complete" elapsed="73.743873ms"
I1007 16:52:54.997070       1 service.go:306] Service webhook-7449/e2e-test-webhook updated: 0 ports
I1007 16:52:54.997236       1 service.go:446] Removing service port "webhook-7449/e2e-test-webhook"
I1007 16:52:54.997512       1 proxier.go:857] "Syncing iptables rules"
I1007 16:52:55.040629       1 proxier.go:824] "syncProxyRules complete" elapsed="43.38303ms"
I1007 16:52:55.040857       1 proxier.go:857] "Syncing iptables rules"
I1007 16:52:55.077373       1 proxier.go:824] "syncProxyRules complete" elapsed="36.700991ms"
I1007 16:52:58.176066       1 service.go:306] Service aggregator-685/sample-api updated: 1 ports
I1007 16:52:58.176134       1 service.go:421] Adding new service port "aggregator-685/sample-api" at 100.70.96.0:7443/TCP
I1007 16:52:58.176277       1 proxier.go:857] "Syncing iptables rules"
I1007 16:52:58.219670       1 proxier.go:824] "syncProxyRules complete" elapsed="43.551289ms"
I1007 16:52:58.219854       1 proxier.go:857] "Syncing iptables rules"
I1007 16:52:58.257254       1 proxier.go:824] "syncProxyRules complete" elapsed="37.541185ms"
I1007 16:53:13.281663       1 proxier.go:857] "Syncing iptables rules"
I1007 16:53:13.323456       1 proxier.go:824] "syncProxyRules complete" elapsed="41.89431ms"
I1007 16:53:15.720958       1 service.go:306] Service endpointslice-579/example-int-port updated: 1 ports
I1007 16:53:15.721003       1 service.go:421] Adding new service port "endpointslice-579/example-int-port:example" at 100.64.169.186:80/TCP
I1007 16:53:15.721128       1 proxier.go:857] "Syncing iptables rules"
I1007 16:53:15.783815       1 proxier.go:824] "syncProxyRules complete" elapsed="62.800765ms"
I1007 16:53:15.783959       1 proxier.go:857] "Syncing iptables rules"
I1007 16:53:15.829899       1 proxier.go:824] "syncProxyRules complete" elapsed="46.046028ms"
I1007 16:53:15.869189       1 service.go:306] Service endpointslice-579/example-named-port updated: 1 ports
I1007 16:53:16.016383       1 service.go:306] Service endpointslice-579/example-no-match updated: 1 ports
I1007 16:53:16.830482       1 service.go:421] Adding new service port "endpointslice-579/example-no-match:example-no-match" at 100.69.23.123:80/TCP
I1007 16:53:16.830515       1 service.go:421] Adding new service port "endpointslice-579/example-named-port:http" at 100.70.41.205:80/TCP
I1007 16:53:16.830710       1 proxier.go:857] "Syncing iptables rules"
I1007 16:53:16.865620       1 proxier.go:824] "syncProxyRules complete" elapsed="35.175845ms"
I1007 16:53:17.865978       1 proxier.go:857] "Syncing iptables rules"
I1007 16:53:17.904976       1 proxier.go:824] "syncProxyRules complete" elapsed="39.203095ms"
I1007 16:53:18.906081       1 proxier.go:857] "Syncing iptables rules"
I1007 16:53:18.945474       1 proxier.go:824] "syncProxyRules complete" elapsed="39.522991ms"
I1007 16:53:19.023297       1 service.go:306] Service aggregator-685/sample-api updated: 0 ports
I1007 16:53:19.946298       1 service.go:446] Removing service port "aggregator-685/sample-api"
I1007 16:53:19.946464       1 proxier.go:857] "Syncing iptables rules"
I1007 16:53:20.036731       1 proxier.go:824] "syncProxyRules complete" elapsed="90.434977ms"
I1007 16:53:22.725286       1 service.go:306] Service endpointslicemirroring-8956/example-custom-endpoints updated: 1 ports
I1007 16:53:22.725340       1 service.go:421] Adding new service port "endpointslicemirroring-8956/example-custom-endpoints:example" at 100.69.158.4:80/TCP
I1007 16:53:22.725452       1 proxier.go:857] "Syncing iptables rules"
I1007 16:53:22.763274       1 proxier.go:824] "syncProxyRules complete" elapsed="37.929455ms"
I1007 16:53:22.885807       1 proxier.go:857] "Syncing iptables rules"
I1007 16:53:22.931971       1 proxier.go:824] "syncProxyRules complete" elapsed="46.326362ms"
I1007 16:53:23.730020       1 service.go:306] Service services-5878/nodeport-update-service updated: 1 ports
I1007 16:53:23.730407       1 service.go:421] Adding new service port "services-5878/nodeport-update-service" at 100.68.121.21:80/TCP
I1007 16:53:23.730716       1 proxier.go:857] "Syncing iptables rules"
I1007 16:53:23.776497       1 proxier.go:824] "syncProxyRules complete" elapsed="46.08837ms"
I1007 16:53:24.024502       1 service.go:306] Service services-5878/nodeport-update-service updated: 1 ports
I1007 16:53:24.777067       1 service.go:421] Adding new service port "services-5878/nodeport-update-service:tcp-port" at 100.68.121.21:80/TCP
I1007 16:53:24.777118       1 service.go:446] Removing service port "services-5878/nodeport-update-service"
I1007 16:53:24.777253       1 proxier.go:857] "Syncing iptables rules"
I1007 16:53:24.827359       1 proxier.go:1292] "Opened local port" port="\"nodePort for services-5878/nodeport-update-service:tcp-port\" (:30229/tcp4)"
I1007 16:53:24.832659       1 proxier.go:824] "syncProxyRules complete" elapsed="55.594321ms"
I1007 16:53:27.370415       1 service.go:306] Service volume-expand-4617-6577/csi-hostpathplugin updated: 1 ports
I1007 16:53:27.370456       1 service.go:421] Adding new service port "volume-expand-4617-6577/csi-hostpathplugin:dummy" at 100.71.172.172:12345/TCP
I1007 16:53:27.370581       1 proxier.go:857] "Syncing iptables rules"
I1007 16:53:27.426040       1 proxier.go:824] "syncProxyRules complete" elapsed="55.570902ms"
I1007 16:53:27.426195       1 proxier.go:857] "Syncing iptables rules"
I1007 16:53:27.477072       1 proxier.go:824] "syncProxyRules complete" elapsed="50.984336ms"
I1007 16:53:28.403253       1 proxier.go:857] "Syncing iptables rules"
I1007 16:53:28.438466       1 proxier.go:824] "syncProxyRules complete" elapsed="35.31205ms"
I1007 16:53:29.016728       1 service.go:306] Service endpointslicemirroring-8956/example-custom-endpoints updated: 0 ports
I1007 16:53:29.438760       1 service.go:446] Removing service port "endpointslicemirroring-8956/example-custom-endpoints:example"
I1007 16:53:29.438973       1 proxier.go:857] "Syncing iptables rules"
I1007 16:53:29.477115       1 proxier.go:824] "syncProxyRules complete" elapsed="38.35748ms"
I1007 16:53:30.192151       1 service.go:306] Service volume-expand-1517-9173/csi-hostpathplugin updated: 0 ports
I1007 16:53:30.477881       1 service.go:446] Removing service port "volume-expand-1517-9173/csi-hostpathplugin:dummy"
I1007 16:53:30.478059       1 proxier.go:857] "Syncing iptables rules"
I1007 16:53:30.517429       1 proxier.go:824] "syncProxyRules complete" elapsed="39.536759ms"
I1007 16:53:31.233128       1 service.go:306] Service provisioning-4145-5462/csi-hostpathplugin updated: 0 ports
I1007 16:53:31.494144       1 service.go:306] Service webhook-1328/e2e-test-webhook updated: 1 ports
I1007 16:53:31.494192       1 service.go:446] Removing service port "provisioning-4145-5462/csi-hostpathplugin:dummy"
I1007 16:53:31.494213       1 service.go:421] Adding new service port "webhook-1328/e2e-test-webhook" at 100.64.231.106:8443/TCP
I1007 16:53:31.494362       1 proxier.go:857] "Syncing iptables rules"
I1007 16:53:31.544015       1 proxier.go:824] "syncProxyRules complete" elapsed="49.80533ms"
I1007 16:53:32.544460       1 proxier.go:857] "Syncing iptables rules"
I1007 16:53:32.578202       1 proxier.go:824] "syncProxyRules complete" elapsed="33.859136ms"
I1007 16:53:34.089532       1 service.go:306] Service webhook-1328/e2e-test-webhook updated: 0 ports
I1007 16:53:34.089571       1 service.go:446] Removing service port "webhook-1328/e2e-test-webhook"
I1007 16:53:34.089873       1 proxier.go:857] "Syncing iptables rules"
I1007 16:53:34.183392       1 proxier.go:824] "syncProxyRules complete" elapsed="93.809831ms"
I1007 16:53:35.183844       1 proxier.go:857] "Syncing iptables rules"
I1007 16:53:35.217996       1 proxier.go:824] "syncProxyRules complete" elapsed="34.26902ms"
I1007 16:53:37.454686       1 proxier.go:857] "Syncing iptables rules"
I1007 16:53:37.501888       1 proxier.go:824] "syncProxyRules complete" elapsed="47.389815ms"
I1007 16:53:37.599453       1 proxier.go:857] "Syncing iptables rules"
I1007 16:53:37.646751       1 proxier.go:824] "syncProxyRules complete" elapsed="47.387553ms"
I1007 16:53:38.459899       1 proxier.go:857] "Syncing iptables rules"
I1007 16:53:38.502893       1 proxier.go:824] "syncProxyRules complete" elapsed="43.06464ms"
I1007 16:53:39.503477       1 proxier.go:857] "Syncing iptables rules"
I1007 16:53:39.575916       1 proxier.go:824] "syncProxyRules complete" elapsed="72.872982ms"
I1007 16:53:43.590425       1 proxier.go:857] "Syncing iptables rules"
I1007 16:53:43.628053       1 proxier.go:824] "syncProxyRules complete" elapsed="37.71217ms"
I1007 16:53:44.411136       1 service.go:306] Service webhook-1193/e2e-test-webhook updated: 1 ports
I1007 16:53:44.411184       1 service.go:421] Adding new service port "webhook-1193/e2e-test-webhook" at 100.64.219.112:8443/TCP
I1007 16:53:44.411359       1 proxier.go:857] "Syncing iptables rules"
I1007 16:53:44.449405       1 proxier.go:824] "syncProxyRules complete" elapsed="38.215785ms"
I1007 16:53:45.393914       1 service.go:306] Service kubectl-7891/agnhost-primary updated: 1 ports
I1007 16:53:45.393970       1 service.go:421] Adding new service port "kubectl-7891/agnhost-primary" at 100.66.139.182:6379/TCP
I1007 16:53:45.394125       1 proxier.go:857] "Syncing iptables rules"
I1007 16:53:45.443819       1 proxier.go:824] "syncProxyRules complete" elapsed="49.837022ms"
I1007 16:53:46.444058       1 proxier.go:857] "Syncing iptables rules"
I1007 16:53:46.568954       1 proxier.go:824] "syncProxyRules complete" elapsed="124.970504ms"
I1007 16:53:49.317010       1 proxier.go:857] "Syncing iptables rules"
I1007 16:53:49.368816       1 proxier.go:824] "syncProxyRules complete" elapsed="51.893294ms"
I1007 16:53:49.658496       1 service.go:306] Service webhook-1193/e2e-test-webhook updated: 0 ports
I1007 16:53:49.658535       1 service.go:446] Removing service port "webhook-1193/e2e-test-webhook"
I1007 16:53:49.658693       1 proxier.go:857] "Syncing iptables rules"
I1007 16:53:49.785019       1 proxier.go:824] "syncProxyRules complete" elapsed="126.469431ms"
I1007 16:53:50.786114       1 proxier.go:857] "Syncing iptables rules"
I1007 16:53:50.859197       1 proxier.go:824] "syncProxyRules complete" elapsed="73.179974ms"
I1007 16:53:53.379729       1 service.go:306] Service endpointslice-579/example-int-port updated: 0 ports
I1007 16:53:53.379772       1 service.go:446] Removing service port "endpointslice-579/example-int-port:example"
I1007 16:53:53.379904       1 proxier.go:857] "Syncing iptables rules"
I1007 16:53:53.394658       1 service.go:306] Service endpointslice-579/example-named-port updated: 0 ports
I1007 16:53:53.416647       1 service.go:306] Service endpointslice-579/example-no-match updated: 0 ports
I1007 16:53:53.450094       1 proxier.go:824] "syncProxyRules complete" elapsed="70.312174ms"
I1007 16:53:53.450125       1 service.go:446] Removing service port "endpointslice-579/example-named-port:http"
I1007 16:53:53.450143       1 service.go:446] Removing service port "endpointslice-579/example-no-match:example-no-match"
I1007 16:53:53.450304       1 proxier.go:857] "Syncing iptables rules"
I1007 16:53:53.487042       1 proxier.go:824] "syncProxyRules complete" elapsed="36.911036ms"
I1007 16:53:54.154034       1 service.go:306] Service services-5878/nodeport-update-service updated: 2 ports
I1007 16:53:54.487225       1 service.go:423] Updating existing service port "services-5878/nodeport-update-service:tcp-port" at 100.68.121.21:80/TCP
I1007 16:53:54.487273       1 service.go:421] Adding new service port "services-5878/nodeport-update-service:udp-port" at 100.68.121.21:80/UDP
I1007 16:53:54.487497       1 proxier.go:841] "Stale service" protocol="udp" svcPortName="services-5878/nodeport-update-service:udp-port" clusterIP="100.68.121.21"
I1007 16:53:54.487577       1 proxier.go:851] Stale udp service NodePort services-5878/nodeport-update-service:udp-port -> 32158
I1007 16:53:54.487601       1 proxier.go:857] "Syncing iptables rules"
I1007 16:53:54.550004       1 proxier.go:1292] "Opened local port" port="\"nodePort for services-5878/nodeport-update-service:udp-port\" (:32158/udp4)"
I1007 16:53:54.585105       1 proxier.go:824] "syncProxyRules complete" elapsed="97.884503ms"
==== END logs for container kube-proxy of pod kube-system/kube-proxy-ip-172-20-43-90.sa-east-1.compute.internal ====
==== START logs for container kube-proxy of pod kube-system/kube-proxy-ip-172-20-47-191.sa-east-1.compute.internal ====
I1007 16:32:33.874346       1 flags.go:59] FLAG: --add-dir-header="false"
I1007 16:32:33.874596       1 flags.go:59] FLAG: --alsologtostderr="true"
I1007 16:32:33.874629       1 flags.go:59] FLAG: --bind-address="0.0.0.0"
I1007 16:32:33.874639       1 flags.go:59] FLAG: --bind-address-hard-fail="false"
I1007 16:32:33.874646       1 flags.go:59] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
I1007 16:32:33.874653       1 flags.go:59] FLAG: --cleanup="false"
I1007 16:32:33.874657       1 flags.go:59] FLAG: --cluster-cidr="100.96.0.0/11"
I1007 16:32:33.874663       1 flags.go:59] FLAG: --config=""
I1007 16:32:33.874668       1 flags.go:59] FLAG: --config-sync-period="15m0s"
I1007 16:32:33.874677       1 flags.go:59] FLAG: --conntrack-max-per-core="131072"
I1007 16:32:33.874684       1 flags.go:59] FLAG: --conntrack-min="131072"
I1007 16:32:33.874690       1 flags.go:59] FLAG: --conntrack-tcp-timeout-close-wait="1h0m0s"
I1007 16:32:33.874695       1 flags.go:59] FLAG: --conntrack-tcp-timeout-established="24h0m0s"
I1007 16:32:33.874701       1 flags.go:59] FLAG: --detect-local-mode=""
I1007 16:32:33.874710       1 flags.go:59] FLAG: --feature-gates=""
I1007 16:32:33.874718       1 flags.go:59] FLAG: --healthz-bind-address="0.0.0.0:10256"
I1007 16:32:33.874728       1 flags.go:59] FLAG: --healthz-port="10256"
I1007 16:32:33.874733       1 flags.go:59] FLAG: --help="false"
I1007 16:32:33.874738       1 flags.go:59] FLAG: --hostname-override="ip-172-20-47-191.sa-east-1.compute.internal"
I1007 16:32:33.874745       1 flags.go:59] FLAG: --iptables-masquerade-bit="14"
I1007 16:32:33.874750       1 flags.go:59] FLAG: --iptables-min-sync-period="1s"
I1007 16:32:33.874755       1 flags.go:59] FLAG: --iptables-sync-period="30s"
I1007 16:32:33.874761       1 flags.go:59] FLAG: --ipvs-exclude-cidrs="[]"
I1007 16:32:33.874778       1 flags.go:59] FLAG: --ipvs-min-sync-period="0s"
I1007 16:32:33.874783       1 flags.go:59] FLAG: --ipvs-scheduler=""
I1007 16:32:33.874787       1 flags.go:59] FLAG: --ipvs-strict-arp="false"
I1007 16:32:33.874792       1 flags.go:59] FLAG: --ipvs-sync-period="30s"
I1007 16:32:33.874798       1 flags.go:59] FLAG: --ipvs-tcp-timeout="0s"
I1007 16:32:33.874809       1 flags.go:59] FLAG: --ipvs-tcpfin-timeout="0s"
I1007 16:32:33.874816       1 flags.go:59] FLAG: --ipvs-udp-timeout="0s"
I1007 16:32:33.874821       1 flags.go:59] FLAG: --kube-api-burst="10"
I1007 16:32:33.874826       1 flags.go:59] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
I1007 16:32:33.874833       1 flags.go:59] FLAG: --kube-api-qps="5"
I1007 16:32:33.874842       1 flags.go:59] FLAG: --kubeconfig="/var/lib/kube-proxy/kubeconfig"
I1007 16:32:33.874848       1 flags.go:59] FLAG: --log-backtrace-at=":0"
I1007 16:32:33.874862       1 flags.go:59] FLAG: --log-dir=""
I1007 16:32:33.874867       1 flags.go:59] FLAG: --log-file="/var/log/kube-proxy.log"
I1007 16:32:33.874873       1 flags.go:59] FLAG: --log-file-max-size="1800"
I1007 16:32:33.874878       1 flags.go:59] FLAG: --log-flush-frequency="5s"
I1007 16:32:33.874883       1 flags.go:59] FLAG: --logtostderr="false"
I1007 16:32:33.874887       1 flags.go:59] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
I1007 16:32:33.874894       1 flags.go:59] FLAG: --masquerade-all="false"
I1007 16:32:33.874899       1 flags.go:59] FLAG: --master="https://api.internal.e2e-f7af145b3f-58f2d.test-cncf-aws.k8s.io"
I1007 16:32:33.874905       1 flags.go:59] FLAG: --metrics-bind-address="127.0.0.1:10249"
I1007 16:32:33.874910       1 flags.go:59] FLAG: --metrics-port="10249"
I1007 16:32:33.874915       1 flags.go:59] FLAG: --nodeport-addresses="[]"
I1007 16:32:33.874926       1 flags.go:59] FLAG: --one-output="false"
I1007 16:32:33.874933       1 flags.go:59] FLAG: --oom-score-adj="-998"
I1007 16:32:33.874941       1 flags.go:59] FLAG: --profiling="false"
I1007 16:32:33.874947       1 flags.go:59] FLAG: --proxy-mode=""
I1007 16:32:33.874957       1 flags.go:59] FLAG: --proxy-port-range=""
I1007 16:32:33.874964       1 flags.go:59] FLAG: --show-hidden-metrics-for-version=""
I1007 16:32:33.874980       1 flags.go:59] FLAG: --skip-headers="false"
I1007 16:32:33.874985       1 flags.go:59] FLAG: --skip-log-headers="false"
I1007 16:32:33.874990       1 flags.go:59] FLAG: --stderrthreshold="2"
I1007 16:32:33.874995       1 flags.go:59] FLAG: --udp-timeout="250ms"
I1007 16:32:33.875001       1 flags.go:59] FLAG: --v="2"
I1007 16:32:33.875007       1 flags.go:59] FLAG: --version="false"
I1007 16:32:33.875014       1 flags.go:59] FLAG: --vmodule=""
I1007 16:32:33.875019       1 flags.go:59] FLAG: --write-config-to=""
W1007 16:32:33.875027       1 server.go:220] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.
I1007 16:32:33.875144       1 feature_gate.go:243] feature gates: &{map[]}
I1007 16:32:33.875283       1 feature_gate.go:243] feature gates: &{map[]}
I1007 16:32:33.929961       1 node.go:172] Successfully retrieved node IP: 172.20.47.191
I1007 16:32:33.930002       1 server_others.go:140] Detected node IP 172.20.47.191
W1007 16:32:33.930030       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
I1007 16:32:33.930125       1 server_others.go:177] DetectLocalMode: 'ClusterCIDR'
I1007 16:32:33.964215       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
I1007 16:32:33.964250       1 server_others.go:212] Using iptables Proxier.
I1007 16:32:33.964264       1 server_others.go:219] creating dualStackProxier for iptables.
W1007 16:32:33.964401       1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
I1007 16:32:33.964572       1 utils.go:375] Changed sysctl "net/ipv4/conf/all/route_localnet": 0 -> 1
I1007 16:32:33.964672       1 proxier.go:276] "missing br-netfilter module or unset sysctl br-nf-call-iptables; proxy may not work as intended"
I1007 16:32:33.964753       1 proxier.go:282] "using iptables mark for masquerade" ipFamily=IPv4 mark="0x00004000"
I1007 16:32:33.964790       1 proxier.go:330] "iptables sync params" ipFamily=IPv4 minSyncPeriod="1s" syncPeriod="30s" burstSyncs=2
I1007 16:32:33.964867       1 proxier.go:340] "iptables supports --random-fully" ipFamily=IPv4
I1007 16:32:33.965014       1 proxier.go:276] "missing br-netfilter module or unset sysctl br-nf-call-iptables; proxy may not work as intended"
I1007 16:32:33.965134       1 proxier.go:282] "using iptables mark for masquerade" ipFamily=IPv6 mark="0x00004000"
I1007 16:32:33.965220       1 proxier.go:330] "iptables sync params" ipFamily=IPv6 minSyncPeriod="1s" syncPeriod="30s" burstSyncs=2
I1007 16:32:33.965311       1 proxier.go:340] "iptables supports --random-fully" ipFamily=IPv6
I1007 16:32:33.965646       1 server.go:643] Version: v1.21.5
I1007 16:32:33.966847       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 262144
I1007 16:32:33.966876       1 conntrack.go:52] Setting nf_conntrack_max to 262144
I1007 16:32:33.966973       1 mount_linux.go:197] Detected OS without systemd
I1007 16:32:33.967185       1 conntrack.go:83] Setting conntrack hashsize to 65536
I1007 16:32:33.975294       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I1007 16:32:33.975353       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I1007 16:32:33.975542       1 config.go:315] Starting service config controller
I1007 16:32:33.975562       1 shared_informer.go:240] Waiting for caches to sync for service config
I1007 16:32:33.975589       1 config.go:224] Starting endpoint slice config controller
I1007 16:32:33.975598       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
W1007 16:32:33.977373       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
I1007 16:32:33.977738       1 service.go:306] Service default/kubernetes updated: 1 ports
I1007 16:32:33.977933       1 service.go:306] Service kube-system/kube-dns updated: 3 ports
W1007 16:32:33.978409       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
I1007 16:32:34.076211       1 shared_informer.go:247] Caches are synced for endpoint slice config 
I1007 16:32:34.076368       1 proxier.go:816] "Not syncing iptables until Services and Endpoints have been received from master"
I1007 16:32:34.076214       1 shared_informer.go:247] Caches are synced for service config 
I1007 16:32:34.076581       1 service.go:421] Adding new service port "default/kubernetes:https" at 100.64.0.1:443/TCP
I1007 16:32:34.076628       1 service.go:421] Adding new service port "kube-system/kube-dns:dns" at 100.64.0.10:53/UDP
I1007 16:32:34.076666       1 service.go:421] Adding new service port "kube-system/kube-dns:dns-tcp" at 100.64.0.10:53/TCP
I1007 16:32:34.076702       1 service.go:421] Adding new service port "kube-system/kube-dns:metrics" at 100.64.0.10:9153/TCP
I1007 16:32:34.076601       1 proxier.go:816] "Not syncing iptables until Services and Endpoints have been received from master"
I1007 16:32:34.076813       1 proxier.go:857] "Syncing iptables rules"
I1007 16:32:34.125026       1 proxier.go:824] "syncProxyRules complete" elapsed="48.458772ms"
I1007 16:32:34.125063       1 proxier.go:857] "Syncing iptables rules"
I1007 16:32:34.156439       1 proxier.go:824] "syncProxyRules complete" elapsed="31.370543ms"
I1007 16:32:42.165315       1 proxier.go:841] "Stale service" protocol="udp" svcPortName="kube-system/kube-dns:dns" clusterIP="100.64.0.10"
I1007 16:32:42.165345       1 proxier.go:857] "Syncing iptables rules"
I1007 16:32:42.200857       1 proxier.go:824] "syncProxyRules complete" elapsed="35.63049ms"
I1007 16:32:46.637134       1 proxier.go:857] "Syncing iptables rules"
I1007 16:32:46.666743       1 proxier.go:824] "syncProxyRules complete" elapsed="29.655328ms"
I1007 16:32:47.443346       1 proxier.go:857] "Syncing iptables rules"
I1007 16:32:47.473334       1 proxier.go:824] "syncProxyRules complete" elapsed="30.058134ms"
I1007 16:32:48.473575       1 proxier.go:857] "Syncing iptables rules"
I1007 16:32:48.509411       1 proxier.go:824] "syncProxyRules complete" elapsed="35.923944ms"
I1007 16:35:24.207477       1 service.go:306] Service services-1343/no-pods updated: 1 ports
I1007 16:35:24.207536       1 service.go:421] Adding new service port "services-1343/no-pods" at 100.69.169.152:80/TCP
I1007 16:35:24.207569       1 proxier.go:857] "Syncing iptables rules"
I1007 16:35:24.246638       1 proxier.go:824] "syncProxyRules complete" elapsed="39.111946ms"
I1007 16:35:24.246815       1 proxier.go:857] "Syncing iptables rules"
I1007 16:35:24.281539       1 proxier.go:824] "syncProxyRules complete" elapsed="34.860585ms"
I1007 16:35:30.192975       1 service.go:306] Service provisioning-7539-5141/csi-hostpathplugin updated: 1 ports
I1007 16:35:30.193031       1 service.go:421] Adding new service port "provisioning-7539-5141/csi-hostpathplugin:dummy" at 100.71.122.109:12345/TCP
I1007 16:35:30.193080       1 proxier.go:857] "Syncing iptables rules"
I1007 16:35:30.229440       1 proxier.go:824] "syncProxyRules complete" elapsed="36.398652ms"
I1007 16:35:30.229493       1 proxier.go:857] "Syncing iptables rules"
I1007 16:35:30.271187       1 proxier.go:824] "syncProxyRules complete" elapsed="41.703335ms"
I1007 16:35:36.477285       1 service.go:306] Service pods-4148/fooservice updated: 1 ports
I1007 16:35:36.477330       1 service.go:421] Adding new service port "pods-4148/fooservice" at 100.69.97.29:8765/TCP
I1007 16:35:36.477366       1 proxier.go:857] "Syncing iptables rules"
I1007 16:35:36.529704       1 proxier.go:824] "syncProxyRules complete" elapsed="52.370955ms"
I1007 16:35:36.529772       1 proxier.go:857] "Syncing iptables rules"
I1007 16:35:36.594700       1 proxier.go:824] "syncProxyRules complete" elapsed="64.957043ms"
I1007 16:35:41.488933       1 service.go:306] Service kubectl-945/agnhost-replica updated: 1 ports
I1007 16:35:41.488980       1 service.go:421] Adding new service port "kubectl-945/agnhost-replica" at 100.68.29.108:6379/TCP
I1007 16:35:41.489037       1 proxier.go:857] "Syncing iptables rules"
I1007 16:35:41.533128       1 proxier.go:824] "syncProxyRules complete" elapsed="44.12878ms"
I1007 16:35:41.533182       1 proxier.go:857] "Syncing iptables rules"
I1007 16:35:41.567136       1 proxier.go:824] "syncProxyRules complete" elapsed="33.970912ms"
I1007 16:35:42.271754       1 service.go:306] Service kubectl-945/agnhost-primary updated: 1 ports
I1007 16:35:42.567290       1 service.go:421] Adding new service port "kubectl-945/agnhost-primary" at 100.68.209.227:6379/TCP
I1007 16:35:42.567385       1 proxier.go:857] "Syncing iptables rules"
I1007 16:35:42.601620       1 proxier.go:824] "syncProxyRules complete" elapsed="34.344997ms"
I1007 16:35:43.049650       1 service.go:306] Service kubectl-945/frontend updated: 1 ports
I1007 16:35:43.602506       1 service.go:421] Adding new service port "kubectl-945/frontend" at 100.70.223.216:80/TCP
I1007 16:35:43.602568       1 proxier.go:857] "Syncing iptables rules"
I1007 16:35:43.633268       1 proxier.go:824] "syncProxyRules complete" elapsed="30.779056ms"
I1007 16:35:44.477047       1 service.go:306] Service ephemeral-8231-6366/csi-hostpathplugin updated: 1 ports
I1007 16:35:44.499075       1 service.go:421] Adding new service port "ephemeral-8231-6366/csi-hostpathplugin:dummy" at 100.64.143.105:12345/TCP
I1007 16:35:44.499140       1 proxier.go:857] "Syncing iptables rules"
I1007 16:35:44.533555       1 proxier.go:824] "syncProxyRules complete" elapsed="34.486029ms"
I1007 16:35:45.078272       1 service.go:306] Service pods-4148/fooservice updated: 0 ports
I1007 16:35:45.534187       1 service.go:446] Removing service port "pods-4148/fooservice"
I1007 16:35:45.534284       1 proxier.go:857] "Syncing iptables rules"
I1007 16:35:45.573944       1 proxier.go:824] "syncProxyRules complete" elapsed="39.759636ms"
I1007 16:35:47.416649       1 proxier.go:857] "Syncing iptables rules"
I1007 16:35:47.451474       1 proxier.go:824] "syncProxyRules complete" elapsed="34.83995ms"
I1007 16:35:48.316170       1 proxier.go:857] "Syncing iptables rules"
I1007 16:35:48.348253       1 proxier.go:824] "syncProxyRules complete" elapsed="32.111923ms"
I1007 16:35:48.566241       1 proxier.go:857] "Syncing iptables rules"
I1007 16:35:48.588191       1 service.go:306] Service proxy-5208/test-service updated: 1 ports
I1007 16:35:48.599520       1 proxier.go:824] "syncProxyRules complete" elapsed="33.308419ms"
I1007 16:35:49.177616       1 service.go:306] Service webhook-5849/e2e-test-webhook updated: 1 ports
I1007 16:35:49.600246       1 service.go:421] Adding new service port "proxy-5208/test-service" at 100.69.44.245:80/TCP
I1007 16:35:49.600280       1 service.go:421] Adding new service port "webhook-5849/e2e-test-webhook" at 100.67.80.195:8443/TCP
I1007 16:35:49.600366       1 proxier.go:857] "Syncing iptables rules"
I1007 16:35:49.667454       1 proxier.go:824] "syncProxyRules complete" elapsed="67.231762ms"
I1007 16:35:50.668451       1 proxier.go:857] "Syncing iptables rules"
I1007 16:35:50.703370       1 proxier.go:824] "syncProxyRules complete" elapsed="34.980197ms"
I1007 16:35:53.334466       1 service.go:306] Service webhook-5849/e2e-test-webhook updated: 0 ports
I1007 16:35:53.334506       1 service.go:446] Removing service port "webhook-5849/e2e-test-webhook"
I1007 16:35:53.334545       1 proxier.go:857] "Syncing iptables rules"
I1007 16:35:53.366582       1 proxier.go:824] "syncProxyRules complete" elapsed="32.066983ms"
I1007 16:35:53.427985       1 proxier.go:857] "Syncing iptables rules"
I1007 16:35:53.462662       1 proxier.go:824] "syncProxyRules complete" elapsed="34.701523ms"
I1007 16:35:54.463454       1 proxier.go:857] "Syncing iptables rules"
I1007 16:35:54.504527       1 proxier.go:824] "syncProxyRules complete" elapsed="41.132122ms"
I1007 16:35:55.968771       1 proxier.go:857] "Syncing iptables rules"
I1007 16:35:55.980157       1 service.go:306] Service proxy-5208/test-service updated: 0 ports
I1007 16:35:56.001017       1 proxier.go:824] "syncProxyRules complete" elapsed="32.279884ms"
I1007 16:35:57.001125       1 service.go:446] Removing service port "proxy-5208/test-service"
I1007 16:35:57.001191       1 proxier.go:857] "Syncing iptables rules"
I1007 16:35:57.034478       1 proxier.go:824] "syncProxyRules complete" elapsed="33.357471ms"
I1007 16:35:59.037914       1 service.go:306] Service volume-expand-3529-148/csi-hostpathplugin updated: 1 ports
I1007 16:35:59.037961       1 service.go:421] Adding new service port "volume-expand-3529-148/csi-hostpathplugin:dummy" at 100.70.215.250:12345/TCP
I1007 16:35:59.038003       1 proxier.go:857] "Syncing iptables rules"
I1007 16:35:59.069390       1 proxier.go:824] "syncProxyRules complete" elapsed="31.425469ms"
I1007 16:35:59.069574       1 proxier.go:857] "Syncing iptables rules"
I1007 16:35:59.100405       1 proxier.go:824] "syncProxyRules complete" elapsed="30.980131ms"
I1007 16:35:59.509897       1 service.go:306] Service proxy-4616/proxy-service-bc8j8 updated: 4 ports
I1007 16:36:00.100526       1 service.go:421] Adding new service port "proxy-4616/proxy-service-bc8j8:portname1" at 100.67.38.141:80/TCP
I1007 16:36:00.100554       1 service.go:421] Adding new service port "proxy-4616/proxy-service-bc8j8:portname2" at 100.67.38.141:81/TCP
I1007 16:36:00.100568       1 service.go:421] Adding new service port "proxy-4616/proxy-service-bc8j8:tlsportname1" at 100.67.38.141:443/TCP
I1007 16:36:00.100584       1 service.go:421] Adding new service port "proxy-4616/proxy-service-bc8j8:tlsportname2" at 100.67.38.141:444/TCP
I1007 16:36:00.100626       1 proxier.go:857] "Syncing iptables rules"
I1007 16:36:00.155470       1 proxier.go:824] "syncProxyRules complete" elapsed="54.950543ms"
I1007 16:36:05.126407       1 proxier.go:857] "Syncing iptables rules"
I1007 16:36:05.161399       1 proxier.go:824] "syncProxyRules complete" elapsed="35.067641ms"
I1007 16:36:07.122462       1 proxier.go:857] "Syncing iptables rules"
I1007 16:36:07.186398       1 proxier.go:824] "syncProxyRules complete" elapsed="64.015048ms"
I1007 16:36:10.321020       1 proxier.go:857] "Syncing iptables rules"
I1007 16:36:10.369217       1 proxier.go:824] "syncProxyRules complete" elapsed="48.237428ms"
I1007 16:36:11.317748       1 proxier.go:857] "Syncing iptables rules"
I1007 16:36:11.357927       1 proxier.go:824] "syncProxyRules complete" elapsed="40.248171ms"
I1007 16:36:11.953021       1 proxier.go:857] "Syncing iptables rules"
I1007 16:36:11.986347       1 proxier.go:824] "syncProxyRules complete" elapsed="33.376406ms"
I1007 16:36:13.687034       1 proxier.go:857] "Syncing iptables rules"
I1007 16:36:13.719249       1 proxier.go:824] "syncProxyRules complete" elapsed="32.242426ms"
I1007 16:36:19.109226       1 service.go:306] Service ephemeral-9076-5159/csi-hostpathplugin updated: 1 ports
I1007 16:36:19.109273       1 service.go:421] Adding new service port "ephemeral-9076-5159/csi-hostpathplugin:dummy" at 100.66.40.13:12345/TCP
I1007 16:36:19.109327       1 proxier.go:857] "Syncing iptables rules"
I1007 16:36:19.163442       1 proxier.go:824] "syncProxyRules complete" elapsed="54.155628ms"
I1007 16:36:19.163506       1 proxier.go:857] "Syncing iptables rules"
I1007 16:36:19.262796       1 proxier.go:824] "syncProxyRules complete" elapsed="99.308789ms"
I1007 16:36:19.970034       1 service.go:306] Service proxy-4616/proxy-service-bc8j8 updated: 0 ports
I1007 16:36:20.263041       1 service.go:446] Removing service port "proxy-4616/proxy-service-bc8j8:portname1"
I1007 16:36:20.263136       1 service.go:446] Removing service port "proxy-4616/proxy-service-bc8j8:portname2"
I1007 16:36:20.263147       1 service.go:446] Removing service port "proxy-4616/proxy-service-bc8j8:tlsportname1"
I1007 16:36:20.263153       1 service.go:446] Removing service port "proxy-4616/proxy-service-bc8j8:tlsportname2"
I1007 16:36:20.263201       1 proxier.go:857] "Syncing iptables rules"
I1007 16:36:20.297613       1 proxier.go:824] "syncProxyRules complete" elapsed="34.578187ms"
I1007 16:36:26.523851       1 proxier.go:857] "Syncing iptables rules"
I1007 16:36:26.562025       1 proxier.go:824] "syncProxyRules complete" elapsed="38.209697ms"
I1007 16:36:28.877510       1 proxier.go:857] "Syncing iptables rules"
I1007 16:36:28.927960       1 proxier.go:824] "syncProxyRules complete" elapsed="50.504109ms"
I1007 16:36:33.351672       1 proxier.go:857] "Syncing iptables rules"
I1007 16:36:33.386450       1 proxier.go:824] "syncProxyRules complete" elapsed="34.812238ms"
I1007 16:36:34.328184       1 proxier.go:857] "Syncing iptables rules"
I1007 16:36:34.378198       1 proxier.go:824] "syncProxyRules complete" elapsed="50.055092ms"
I1007 16:36:35.343415       1 proxier.go:857] "Syncing iptables rules"
I1007 16:36:35.382422       1 proxier.go:824] "syncProxyRules complete" elapsed="39.050447ms"
I1007 16:36:35.415385       1 service.go:306] Service volume-expand-1606-6948/csi-hostpathplugin updated: 1 ports
I1007 16:36:35.415486       1 service.go:421] Adding new service port "volume-expand-1606-6948/csi-hostpathplugin:dummy" at 100.71.176.33:12345/TCP
I1007 16:36:35.415568       1 proxier.go:857] "Syncing iptables rules"
I1007 16:36:35.454361       1 proxier.go:824] "syncProxyRules complete" elapsed="38.877461ms"
I1007 16:36:36.454491       1 proxier.go:857] "Syncing iptables rules"
I1007 16:36:36.501106       1 proxier.go:824] "syncProxyRules complete" elapsed="46.670542ms"
I1007 16:36:40.903688       1 proxier.go:857] "Syncing iptables rules"
I1007 16:36:40.936878       1 proxier.go:824] "syncProxyRules complete" elapsed="33.23648ms"
I1007 16:36:45.849022       1 proxier.go:857] "Syncing iptables rules"
I1007 16:36:45.881975       1 proxier.go:824] "syncProxyRules complete" elapsed="32.995069ms"
I1007 16:36:46.636634       1 
proxier.go:857] \"Syncing iptables rules\"\nI1007 16:36:46.683560       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"46.988551ms\"\nI1007 16:36:54.445251       1 service.go:306] Service provisioning-7539-5141/csi-hostpathplugin updated: 0 ports\nI1007 16:36:54.445294       1 service.go:446] Removing service port \"provisioning-7539-5141/csi-hostpathplugin:dummy\"\nI1007 16:36:54.445344       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:36:54.478876       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"33.573259ms\"\nI1007 16:36:54.485587       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:36:54.517982       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"32.408717ms\"\nI1007 16:36:55.143588       1 service.go:306] Service svc-latency-3095/latency-svc-s6cc6 updated: 1 ports\nI1007 16:36:55.317575       1 service.go:306] Service svc-latency-3095/latency-svc-9kfjx updated: 1 ports\nI1007 16:36:55.328685       1 service.go:306] Service svc-latency-3095/latency-svc-jx2gg updated: 1 ports\nI1007 16:36:55.341811       1 service.go:306] Service svc-latency-3095/latency-svc-l6t5w updated: 1 ports\nI1007 16:36:55.346414       1 service.go:306] Service svc-latency-3095/latency-svc-t2lf5 updated: 1 ports\nI1007 16:36:55.353445       1 service.go:306] Service svc-latency-3095/latency-svc-bq96v updated: 1 ports\nI1007 16:36:55.447697       1 service.go:306] Service svc-latency-3095/latency-svc-ff565 updated: 1 ports\nI1007 16:36:55.447744       1 service.go:421] Adding new service port \"svc-latency-3095/latency-svc-bq96v\" at 100.65.93.66:80/TCP\nI1007 16:36:55.447776       1 service.go:421] Adding new service port \"svc-latency-3095/latency-svc-ff565\" at 100.66.213.245:80/TCP\nI1007 16:36:55.447788       1 service.go:421] Adding new service port \"svc-latency-3095/latency-svc-s6cc6\" at 100.65.187.89:80/TCP\nI1007 16:36:55.447799       1 service.go:421] Adding new service port \"svc-latency-3095/latency-svc-9kfjx\" at 
100.65.151.141:80/TCP\nI1007 16:36:55.447810       1 service.go:421] Adding new service port \"svc-latency-3095/latency-svc-jx2gg\" at 100.69.102.108:80/TCP\nI1007 16:36:55.447822       1 service.go:421] Adding new service port \"svc-latency-3095/latency-svc-l6t5w\" at 100.69.252.65:80/TCP\nI1007 16:36:55.447832       1 service.go:421] Adding new service port \"svc-latency-3095/latency-svc-t2lf5\" at 100.64.94.141:80/TCP\nI1007 16:36:55.447947       1 proxier.go:857] \"Syncing iptables rules\"\nI1007 16:36:55.456647       1 service.go:306] Service svc-latency-3095/latency-svc-nnsdl updated: 1 ports\nI1007 16:36:55.459436       1 service.go:306] Service svc-latency-3095/latency-svc-gbf5h updated: 1 ports\nI1007 16:36:55.474970       1 service.go:306] Service svc-latency-3095/latency-svc-z2zbj updated: 1 ports\nI1007 16:36:55.487525       1 service.go:306] Service svc-latency-3095/latency-svc-dsjks updated: 1 ports\nI1007 16:36:55.494798       1 service.go:306] Service svc-latency-3095/latency-svc-5trpp updated: 1 ports\nI1007 16:36:55.495941       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"48.19639ms\"\nI1007 16:36:55.514610       1 service.go:306] Service svc-latency-3095/latency-svc-g5qn4 updated: 1 ports\nI1007 16:36:55.535542       1 service.go:306] Service svc-latency-3095/latency-svc-qcstz updated: 1 ports\nI1007 16:36:55.544169       1 service.go:306] Service svc-latency-3095/latency-svc-4rpbj updated: 1 ports\nI1007 16:36:55.555469       1 service.go:306] Service svc-latency-3095/latency-svc-b87kg updated: 1 ports\nI1007 16:36:55.567924       1 service.go:306] Service svc-latency-3095/latency-svc-xkxhn updated: 1 ports\nI1007 16:36:55.575195       1 service.go:306] Service svc-latency-3095/latency-svc-t8hsl updated: 1 ports\nI1007 16:36:55.581673       1 service.go:306] Service svc-latency-3095/latency-svc-bx4kx updated: 1 ports\nI1007 16:36:55.594054       1 service.go:306] Service svc-latency-3095/latency-svc-4cv7j updated: 1 ports\nI1007 
16:36:55.597830       1 service.go:306] Service svc-latency-3095/latency-svc-pnlg5 updated: 1 ports\nI1007 16:36:55.606988       1 service.go:306] Service svc-latency-3095/latency-svc-qt6bg updated: 1 ports\nI1007 16:36:55.621278       1 service.go:306] Service svc-latency-3095/latency-svc-98tvv updated: 1 ports\nI1007 16:36:55.627031       1 service.go:306] Service svc-latency-3095/latency-svc-gwngf updated: 1 ports\nI1007 16:36:55.641888       1 service.go:306] Service svc-latency-3095/latency-svc-zvn2p updated: 1 ports\nI1007 16:36:55.653518       1 service.go:306] Service svc-latency-3095/latency-svc-hm4s8 updated: 1 ports\nI1007 16:36:55.675611       1 service.go:306] Service svc-latency-3095/latency-svc-6ldx5 updated: 1 ports\nI1007 16:36:55.683195       1 service.go:306] Service svc-latency-3095/latency-svc-q68gs updated: 1 ports\nI1007 16:36:55.697189       1 service.go:306] Service svc-latency-3095/latency-svc-w6pjg updated: 1 ports\nI1007 16:36:55.706682       1 service.go:306] Service svc-latency-3095/latency-svc-rddgn updated: 1 ports\nI1007 16:36:55.747809       1 service.go:306] Service svc-latency-3095/latency-svc-ctvfm updated: 1 ports\nI1007 16:36:55.784480       1 service.go:306] Service svc-latency-3095/latency-svc-c44lg updated: 1 ports\nI1007 16:36:55.817839       1 service.go:306] Service svc-latency-3095/latency-svc-qpktt updated: 1 ports\nI1007 16:36:55.831638       1 service.go:306] Service svc-latency-3095/latency-svc-fljkf updated: 1 ports\nI1007 16:36:55.860470       1 service.go:306] Service svc-latency-3095/latency-svc-mdv9v updated: 1 ports\nI1007 16:36:55.883343       1 service.go:306] Service svc-latency-3095/latency-svc-8kddc updated: 1 ports\nI1007 16:36:55.907084       1 service.go:306] Service svc-latency-3095/latency-svc-d5zn8 updated: 1 ports\nI1007 16:36:55.914270       1 service.go:306] Service svc-latency-3095/latency-svc-9kf42 updated: 1 ports\nI1007 16:36:55.919845       1 service.go:306] Service 
svc-latency-3095/latency-svc-rmbrm updated: 1 ports\nI1007 16:36:55.932181       1 service.go:306] Service svc-latency-3095/latency-svc-87x29 updated: 1 ports\nI1007 16:36:55.953793       1 service.go:306] Service svc-latency-3095/latency-svc-kf28m updated: 1 ports\nI1007 16:36:55.964576       1 service.go:306] Service svc-latency-3095/latency-svc-4fhml updated: 1 ports\nI1007 16:36:55.971010       1 service.go:306] Service svc-latency-3095/latency-svc-jwjg7 updated: 1 ports\nI1007 16:36:55.983946       1 service.go:306] Service svc-latency-3095/latency-svc-r44kh updated: 1 ports\nI1007 16:36:55.994055       1 service.go:306] Service svc-latency-3095/latency-svc-8vhnw updated: 1 ports\nI1007 16:36:56.000732       1 service.go:306] Service svc-latency-3095/latency-svc-bbvgj updated: 1 ports\nI1007 16:36:56.009204       1 service.go:306] Service svc-latency-3095/latency-svc-pgdl8 updated: 1 ports\nI1007 16:36:56.010622       1 service.go:306] Service svc-latency-3095/latency-svc-2hlbc updated: 1 ports\nI1007 16:36:56.025646       1 service.go:306] Service svc-latency-3095/latency-svc-mlr6d updated: 1 ports\nI1007 16:36:56.041645       1 service.go:306] Service svc-latency-3095/latency-svc-mxnnz updated: 1 ports\nI1007 16:36:56.051359       1 service.go:306] Service svc-latency-3095/latency-svc-4t54c updated: 1 ports\nI1007 16:36:56.061393       1 service.go:306] Service svc-latency-3095/latency-svc-7bkhm updated: 1 ports\nI1007 16:36:56.071052       1 service.go:306] Service svc-latency-3095/latency-svc-kkldd updated: 1 ports\nI1007 16:36:56.082138       1 service.go:306] Service svc-latency-3095/latency-svc-9442f updated: 1 ports\nI1007 16:36:56.094323       1 service.go:306] Service svc-latency-3095/latency-svc-fzzx6 updated: 1 ports\nI1007 16:36:56.104774       1 service.go:306] Service svc-latency-3095/latency-svc-fl9bx updated: 1 ports\nI1007 16:36:56.143486       1 service.go:306] Service svc-latency-3095/latency-svc-fdk5t updated: 1 ports\nI1007 
16:36:56.148193       1 service.go:306] Service svc-latency-3095/latency-svc-gpfnm updated: 1 ports\nI1007 16:36:56.151394       1 service.go:306] Service svc-latency-3095/latency-svc-lr28t updated: 1 ports\nI1007 16:36:56.180380       1 service.go:306] Service svc-latency-3095/latency-svc-wn7m9 updated: 1 ports\nI1007 16:36:56.231768       1 service.go:306] Service svc-latency-3095/latency-svc-kpg48 updated: 1 ports\nI1007 16:36:56.289868       1 service.go:306] Service svc-latency-3095/latency-svc-b6dh9 updated: 1 ports\nI1007 16:36:56.321926       1 service.go:306] Service svc-latency-3095/latency-svc-7qzx2 updated: 1 ports\nI1007 16:36:56.381493       1 service.go:306] Service svc-latency-3095/latency-svc-4vvjq updated: 1 ports\nI1007 16:36:56.425189       1 service.go:306] Service svc-latency-3095/latency-svc-7dbpk updated: 1 ports\nI1007 16:36:56.477656       1 service.go:306] Service svc-latency-3095/latency-svc-lgr2h updated: 1 ports\nI1007 16:36:56.477738       1 service.go:421] Adding new service port \"svc-latency-3095/latency-svc-rmbrm\" at 100.67.179.134:80/TCP\nI1007 16:36:56.477763       1 service.go:421] Adding new service port \"svc-latency-3095/latency-svc-wn7m9\" at 100.64.21.191:80/TCP\nI1007 16:36:56.477775       1 service.go:421] Adding new service port \"svc-latency-3095/latency-svc-z2zbj\" at 100.65.72.93:80/TCP\nI1007 16:36:56.477786       1 service.go:421] Adding new service port \"svc-latency-3095/latency-svc-7qzx2\" at 100.70.249.235:80/TCP\nI1007 16:36:56.477798       1 service.go:421] Adding new service port \"svc-latency-3095/latency-svc-qcstz\" at 100.66.76.0:80/TCP\nI1007 16:36:56.477818       1 service.go:421] Adding new service port \"svc-latency-3095/latency-svc-pnlg5\" at 100.71.91.21:80/TCP\nI1007 16:36:56.477835       1 service.go:421] Adding new service port \"svc-latency-3095/latency-svc-mlr6d\" at 100.67.80.45:80/TCP\nI1007 16:36:56.477850       1 service.go:421] Adding new service port 
\"svc-latency-3095/latency-svc-gbf5h\" at 100.67.134.122:80/TCP\nI1007 16:36:56.477870       1 service.go:421] Adding new service port \"svc-latency-3095/latency-svc-jwjg7\" at 100.64.161.15:80/TCP\nI1007 16:36:56.477886       1 service.go:421] Adding new service port \"svc-latency-3095/latency-svc-b87kg\" at 100.68.28.8:80/TCP\nI1007 16:36:56.477903       1 service.go:421] Adding new service port \"svc-latency-3095/latency-svc-t8hsl\" at 100.69.244.232:80/TCP\nI1007 16:36:56.477922       1 service.go:421] Adding new service port \"svc-latency-3095/latency-svc-8kddc\" at 100.69.153.63:80/TCP\nI1007 16:36:56.477933       1 service.go:421] Adding new service port \"svc-latency-3095/latency-svc-kf28m\" at 100.70.154.217:80/TCP\nI1007 16:36:56.477944       1 service.go:421] Adding new service port \"svc-latency-3095/latency-svc-9442f\" at 100.68.12.230:80/TCP\nI1007 16:36:56.477956       1 service.go:421] Adding new service port \"svc-latency-3095/latency-svc-b6dh9\" at 100.71.251.62:80/TCP\nI1007 16:36:56.477966       1 service.go:421] Adding new service port \"svc-latency-3095/latency-svc-dsjks\" at 100.67.233.63:80/TCP\nI1007 16:36:56.477978       1 service.go:421] Adding new service port \"svc-latency-3095/latency-svc-rddgn\" at 100.67.77.164:80/TCP\nI1007 16:36:56.477993       1 service.go:421] Adding new service port \"svc-latency-3095/latency-svc-fdk5t\" at 100.65.14.48:80/TCP\nI1007 16:36:56.478008       1 service.go:421] Adding new service port \"svc-latency-3095/latency-svc-bx4kx\" at 100.67.65.65:80/TCP\nI1007 16:36:56.478018       1 service.go:421] Adding new s