Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2021-10-10 15:36
Elapsed: 34m50s
Revision: master

No Test Failures!


Error lines from build-log.txt

... skipping 128 lines ...
I1010 15:36:51.948721    4729 http.go:37] curl https://storage.googleapis.com/kops-ci/bin/latest-ci-updown-green.txt
I1010 15:36:51.950381    4729 http.go:37] curl https://storage.googleapis.com/kops-ci/bin/1.23.0-alpha.2+v1.23.0-alpha.1-404-g18f8b0149c/linux/amd64/kops
I1010 15:36:53.125690    4729 up.go:43] Cleaning up any leaked resources from previous cluster
I1010 15:36:53.125731    4729 dumplogs.go:40] /logs/artifacts/b1e53313-29df-11ec-b781-c649eef4635a/kops toolbox dump --name e2e-2cea5de97a-6a582.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /etc/aws-ssh/aws-ssh-private --ssh-user ubuntu
I1010 15:36:53.144699    4750 featureflag.go:166] FeatureFlag "SpecOverrideFlag"=true
I1010 15:36:53.144810    4750 featureflag.go:166] FeatureFlag "AlphaAllowGCE"=true
Error: Cluster.kops.k8s.io "e2e-2cea5de97a-6a582.test-cncf-aws.k8s.io" not found
W1010 15:36:53.623473    4729 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I1010 15:36:53.623543    4729 down.go:48] /logs/artifacts/b1e53313-29df-11ec-b781-c649eef4635a/kops delete cluster --name e2e-2cea5de97a-6a582.test-cncf-aws.k8s.io --yes
I1010 15:36:53.644446    4761 featureflag.go:166] FeatureFlag "SpecOverrideFlag"=true
I1010 15:36:53.645078    4761 featureflag.go:166] FeatureFlag "AlphaAllowGCE"=true
Error: error reading cluster configuration: Cluster.kops.k8s.io "e2e-2cea5de97a-6a582.test-cncf-aws.k8s.io" not found
I1010 15:36:54.102934    4729 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2021/10/10 15:36:54 failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
I1010 15:36:54.204527    4729 http.go:37] curl https://ip.jsb.workers.dev
I1010 15:36:54.336390    4729 up.go:144] /logs/artifacts/b1e53313-29df-11ec-b781-c649eef4635a/kops create cluster --name e2e-2cea5de97a-6a582.test-cncf-aws.k8s.io --cloud aws --kubernetes-version https://storage.googleapis.com/kubernetes-release/release/v1.22.2 --ssh-public-key /etc/aws-ssh/aws-ssh-public --override cluster.spec.nodePortAccess=0.0.0.0/0 --yes --image=099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20211001 --channel=alpha --networking=cilium --container-runtime=docker --admin-access 35.184.205.92/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones sa-east-1a --master-size c5.large
I1010 15:36:54.354968    4771 featureflag.go:166] FeatureFlag "SpecOverrideFlag"=true
I1010 15:36:54.355077    4771 featureflag.go:166] FeatureFlag "AlphaAllowGCE"=true
I1010 15:36:54.380207    4771 create_cluster.go:838] Using SSH public key: /etc/aws-ssh/aws-ssh-public
I1010 15:36:54.911266    4771 new_cluster.go:1077]  Cloud Provider ID = aws
... skipping 31 lines ...

I1010 15:37:22.960062    4729 up.go:181] /logs/artifacts/b1e53313-29df-11ec-b781-c649eef4635a/kops validate cluster --name e2e-2cea5de97a-6a582.test-cncf-aws.k8s.io --count 10 --wait 15m0s
I1010 15:37:22.981997    4791 featureflag.go:166] FeatureFlag "SpecOverrideFlag"=true
I1010 15:37:22.983143    4791 featureflag.go:166] FeatureFlag "AlphaAllowGCE"=true
Validating cluster e2e-2cea5de97a-6a582.test-cncf-aws.k8s.io

W1010 15:37:24.388093    4791 validate_cluster.go:184] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-2cea5de97a-6a582.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-sa-east-1a	Master	c5.large	1	1	sa-east-1a
nodes-sa-east-1a	Node	t3.medium	4	4	sa-east-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
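The placeholder address 203.0.113.123 above means dns-controller has not yet written the real API record. A minimal, illustrative way to dig further, reusing the dump command this job already runs; the kube-system namespace and the protokube unit/container names are assumptions from a default kops install, not confirmed by this log:

  /logs/artifacts/b1e53313-29df-11ec-b781-c649eef4635a/kops toolbox dump --name e2e-2cea5de97a-6a582.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /etc/aws-ssh/aws-ssh-private --ssh-user ubuntu   # collect node and cluster state over SSH
  kubectl -n kube-system logs deployment/dns-controller   # once the API name resolves, see why the DNS record was not updated
  sudo journalctl -u protokube                            # on the control-plane host; or 'docker logs protokube' if protokube runs as a container

These commands are shown for orientation only; the validation loop below simply keeps retrying until the record propagates or the 15m wait expires.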
W1010 15:37:34.420625    4791 validate_cluster.go:232] (will retry): cluster not yet healthy
... skipping repeated validation output: the identical "dns  apiserver  Validation Failed" report above recurs on every retry (roughly every 10 seconds) from 15:37:34 through 15:40:45 ...
W1010 15:40:55.147601    4791 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-sa-east-1a	Master	c5.large	1	1	sa-east-1a
nodes-sa-east-1a	Node	t3.medium	4	4	sa-east-1a

... skipping 10 lines ...
Pod	kube-system/cilium-4rz6t			system-node-critical pod "cilium-4rz6t" is not ready (cilium-agent)
Pod	kube-system/coredns-5dc785954d-kbwrz		system-cluster-critical pod "coredns-5dc785954d-kbwrz" is pending
Pod	kube-system/coredns-autoscaler-84d4cfd89c-qjcr4	system-cluster-critical pod "coredns-autoscaler-84d4cfd89c-qjcr4" is pending
Pod	kube-system/ebs-csi-controller-698f4bd686-kd24b	system-cluster-critical pod "ebs-csi-controller-698f4bd686-kd24b" is pending
Pod	kube-system/ebs-csi-node-5gs48			system-node-critical pod "ebs-csi-node-5gs48" is pending

Validation Failed
W1010 15:41:08.732723    4791 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-sa-east-1a	Master	c5.large	1	1	sa-east-1a
nodes-sa-east-1a	Node	t3.medium	4	4	sa-east-1a

... skipping 14 lines ...
Pod	kube-system/coredns-autoscaler-84d4cfd89c-qjcr4	system-cluster-critical pod "coredns-autoscaler-84d4cfd89c-qjcr4" is pending
Pod	kube-system/ebs-csi-controller-698f4bd686-kd24b	system-cluster-critical pod "ebs-csi-controller-698f4bd686-kd24b" is pending
Pod	kube-system/ebs-csi-node-5gs48			system-node-critical pod "ebs-csi-node-5gs48" is pending
Pod	kube-system/ebs-csi-node-f5qpd			system-node-critical pod "ebs-csi-node-f5qpd" is pending
Pod	kube-system/ebs-csi-node-j7zbn			system-node-critical pod "ebs-csi-node-j7zbn" is pending

Validation Failed
W1010 15:41:21.257132    4791 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-sa-east-1a	Master	c5.large	1	1	sa-east-1a
nodes-sa-east-1a	Node	t3.medium	4	4	sa-east-1a

... skipping 22 lines ...
Pod	kube-system/ebs-csi-node-5gs48			system-node-critical pod "ebs-csi-node-5gs48" is pending
Pod	kube-system/ebs-csi-node-f5qpd			system-node-critical pod "ebs-csi-node-f5qpd" is pending
Pod	kube-system/ebs-csi-node-hnbbm			system-node-critical pod "ebs-csi-node-hnbbm" is pending
Pod	kube-system/ebs-csi-node-j7zbn			system-node-critical pod "ebs-csi-node-j7zbn" is pending
Pod	kube-system/ebs-csi-node-n98sn			system-node-critical pod "ebs-csi-node-n98sn" is pending

Validation Failed
W1010 15:41:33.728376    4791 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-sa-east-1a	Master	c5.large	1	1	sa-east-1a
nodes-sa-east-1a	Node	t3.medium	4	4	sa-east-1a

... skipping 21 lines ...
Pod	kube-system/ebs-csi-node-5gs48			system-node-critical pod "ebs-csi-node-5gs48" is pending
Pod	kube-system/ebs-csi-node-f5qpd			system-node-critical pod "ebs-csi-node-f5qpd" is pending
Pod	kube-system/ebs-csi-node-hnbbm			system-node-critical pod "ebs-csi-node-hnbbm" is pending
Pod	kube-system/ebs-csi-node-j7zbn			system-node-critical pod "ebs-csi-node-j7zbn" is pending
Pod	kube-system/ebs-csi-node-n98sn			system-node-critical pod "ebs-csi-node-n98sn" is pending

Validation Failed
W1010 15:41:46.194159    4791 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-sa-east-1a	Master	c5.large	1	1	sa-east-1a
nodes-sa-east-1a	Node	t3.medium	4	4	sa-east-1a

... skipping 15 lines ...
Pod	kube-system/ebs-csi-node-5gs48			system-node-critical pod "ebs-csi-node-5gs48" is pending
Pod	kube-system/ebs-csi-node-f5qpd			system-node-critical pod "ebs-csi-node-f5qpd" is pending
Pod	kube-system/ebs-csi-node-hnbbm			system-node-critical pod "ebs-csi-node-hnbbm" is pending
Pod	kube-system/ebs-csi-node-j7zbn			system-node-critical pod "ebs-csi-node-j7zbn" is pending
Pod	kube-system/ebs-csi-node-n98sn			system-node-critical pod "ebs-csi-node-n98sn" is pending

Validation Failed
W1010 15:41:58.701106    4791 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-sa-east-1a	Master	c5.large	1	1	sa-east-1a
nodes-sa-east-1a	Node	t3.medium	4	4	sa-east-1a

... skipping 13 lines ...
Pod	kube-system/ebs-csi-node-5gs48			system-node-critical pod "ebs-csi-node-5gs48" is pending
Pod	kube-system/ebs-csi-node-f5qpd			system-node-critical pod "ebs-csi-node-f5qpd" is pending
Pod	kube-system/ebs-csi-node-hnbbm			system-node-critical pod "ebs-csi-node-hnbbm" is pending
Pod	kube-system/ebs-csi-node-j7zbn			system-node-critical pod "ebs-csi-node-j7zbn" is pending
Pod	kube-system/ebs-csi-node-n98sn			system-node-critical pod "ebs-csi-node-n98sn" is pending

Validation Failed
W1010 15:42:11.241753    4791 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-sa-east-1a	Master	c5.large	1	1	sa-east-1a
nodes-sa-east-1a	Node	t3.medium	4	4	sa-east-1a

... skipping 11 lines ...
Pod	kube-system/ebs-csi-node-5gs48			system-node-critical pod "ebs-csi-node-5gs48" is pending
Pod	kube-system/ebs-csi-node-f5qpd			system-node-critical pod "ebs-csi-node-f5qpd" is pending
Pod	kube-system/ebs-csi-node-hnbbm			system-node-critical pod "ebs-csi-node-hnbbm" is pending
Pod	kube-system/ebs-csi-node-j7zbn			system-node-critical pod "ebs-csi-node-j7zbn" is pending
Pod	kube-system/ebs-csi-node-n98sn			system-node-critical pod "ebs-csi-node-n98sn" is pending

Validation Failed
W1010 15:42:23.625386    4791 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-sa-east-1a	Master	c5.large	1	1	sa-east-1a
nodes-sa-east-1a	Node	t3.medium	4	4	sa-east-1a

... skipping 6 lines ...
ip-172-20-61-156.sa-east-1.compute.internal	node	True

VALIDATION ERRORS
KIND	NAME						MESSAGE
Pod	kube-system/ebs-csi-controller-698f4bd686-kd24b	system-cluster-critical pod "ebs-csi-controller-698f4bd686-kd24b" is pending

Validation Failed
W1010 15:42:36.016487    4791 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-sa-east-1a	Master	c5.large	1	1	sa-east-1a
nodes-sa-east-1a	Node	t3.medium	4	4	sa-east-1a

... skipping 1339 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 10 15:45:15.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7141" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":-1,"completed":1,"skipped":6,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:45:16.346: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 61 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 10 15:45:17.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5900" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl create quota should create a quota without scopes","total":-1,"completed":1,"skipped":9,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:45:18.049: INFO: Only supported for providers [gce gke] (not aws)
... skipping 55 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 10 15:45:18.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-8555" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":-1,"completed":1,"skipped":9,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:45:18.946: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 65 lines ...
• [SLOW TEST:8.013 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should get a host IP [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Pods should get a host IP [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":0,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:45:22.715: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 52 lines ...
Oct 10 15:45:21.019: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n"
Oct 10 15:45:21.019: INFO: stdout: "scheduler controller-manager etcd-0 etcd-1"
STEP: getting details of componentstatuses
STEP: getting status of scheduler
Oct 10 15:45:21.019: INFO: Running '/tmp/kubectl1123422819/kubectl --server=https://api.e2e-2cea5de97a-6a582.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-9389 get componentstatuses scheduler'
Oct 10 15:45:21.525: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n"
Oct 10 15:45:21.525: INFO: stdout: "NAME        STATUS    MESSAGE   ERROR\nscheduler   Healthy   ok        \n"
STEP: getting status of controller-manager
Oct 10 15:45:21.525: INFO: Running '/tmp/kubectl1123422819/kubectl --server=https://api.e2e-2cea5de97a-6a582.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-9389 get componentstatuses controller-manager'
Oct 10 15:45:22.025: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n"
Oct 10 15:45:22.025: INFO: stdout: "NAME                 STATUS    MESSAGE   ERROR\ncontroller-manager   Healthy   ok        \n"
STEP: getting status of etcd-0
Oct 10 15:45:22.025: INFO: Running '/tmp/kubectl1123422819/kubectl --server=https://api.e2e-2cea5de97a-6a582.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-9389 get componentstatuses etcd-0'
Oct 10 15:45:22.542: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n"
Oct 10 15:45:22.542: INFO: stdout: "NAME     STATUS    MESSAGE                         ERROR\netcd-0   Healthy   {\"health\":\"true\",\"reason\":\"\"}   \n"
STEP: getting status of etcd-1
Oct 10 15:45:22.542: INFO: Running '/tmp/kubectl1123422819/kubectl --server=https://api.e2e-2cea5de97a-6a582.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-9389 get componentstatuses etcd-1'
Oct 10 15:45:23.045: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n"
Oct 10 15:45:23.045: INFO: stdout: "NAME     STATUS    MESSAGE                         ERROR\netcd-1   Healthy   {\"health\":\"true\",\"reason\":\"\"}   \n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 10 15:45:23.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9389" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl get componentstatuses should get componentstatuses","total":-1,"completed":2,"skipped":15,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
... skipping 57 lines ...
Oct 10 15:45:15.354: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run with an explicit non-root user ID [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:129
Oct 10 15:45:15.789: INFO: Waiting up to 5m0s for pod "explicit-nonroot-uid" in namespace "security-context-test-2108" to be "Succeeded or Failed"
Oct 10 15:45:15.947: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 158.590067ms
Oct 10 15:45:18.098: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.30863085s
Oct 10 15:45:20.242: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 4.453007951s
Oct 10 15:45:22.386: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 6.597090041s
Oct 10 15:45:24.530: INFO: Pod "explicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.741025174s
Oct 10 15:45:24.530: INFO: Pod "explicit-nonroot-uid" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 10 15:45:24.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-2108" for this suite.


... skipping 2 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a container with runAsNonRoot
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:104
    should run with an explicit non-root user ID [LinuxOnly]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:129
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should run with an explicit non-root user ID [LinuxOnly]","total":-1,"completed":1,"skipped":3,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:45:25.160: INFO: Only supported for providers [openstack] (not aws)
... skipping 46 lines ...
Oct 10 15:45:25.198: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test override arguments
Oct 10 15:45:26.096: INFO: Waiting up to 5m0s for pod "client-containers-fac77f5b-e3ce-4962-bc6b-3775630ed274" in namespace "containers-3008" to be "Succeeded or Failed"
Oct 10 15:45:26.240: INFO: Pod "client-containers-fac77f5b-e3ce-4962-bc6b-3775630ed274": Phase="Pending", Reason="", readiness=false. Elapsed: 144.15846ms
Oct 10 15:45:28.389: INFO: Pod "client-containers-fac77f5b-e3ce-4962-bc6b-3775630ed274": Phase="Pending", Reason="", readiness=false. Elapsed: 2.293449594s
Oct 10 15:45:30.534: INFO: Pod "client-containers-fac77f5b-e3ce-4962-bc6b-3775630ed274": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.437824232s
STEP: Saw pod success
Oct 10 15:45:30.534: INFO: Pod "client-containers-fac77f5b-e3ce-4962-bc6b-3775630ed274" satisfied condition "Succeeded or Failed"
Oct 10 15:45:30.680: INFO: Trying to get logs from node ip-172-20-54-137.sa-east-1.compute.internal pod client-containers-fac77f5b-e3ce-4962-bc6b-3775630ed274 container agnhost-container: <nil>
STEP: delete the pod
Oct 10 15:45:31.127: INFO: Waiting for pod client-containers-fac77f5b-e3ce-4962-bc6b-3775630ed274 to disappear
Oct 10 15:45:31.271: INFO: Pod client-containers-fac77f5b-e3ce-4962-bc6b-3775630ed274 no longer exists
[AfterEach] [sig-node] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.362 seconds]
[sig-node] Docker Containers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":12,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:45:31.570: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 22 lines ...
W1010 15:45:15.356864    5531 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 10 15:45:15.356: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support seccomp runtime/default [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:176
STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod
Oct 10 15:45:15.789: INFO: Waiting up to 5m0s for pod "security-context-43a66d72-9a37-446a-942f-14d684ec68eb" in namespace "security-context-8158" to be "Succeeded or Failed"
Oct 10 15:45:15.948: INFO: Pod "security-context-43a66d72-9a37-446a-942f-14d684ec68eb": Phase="Pending", Reason="", readiness=false. Elapsed: 159.220362ms
Oct 10 15:45:18.097: INFO: Pod "security-context-43a66d72-9a37-446a-942f-14d684ec68eb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.308675412s
Oct 10 15:45:20.241: INFO: Pod "security-context-43a66d72-9a37-446a-942f-14d684ec68eb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.452046327s
Oct 10 15:45:22.385: INFO: Pod "security-context-43a66d72-9a37-446a-942f-14d684ec68eb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.596294318s
Oct 10 15:45:24.529: INFO: Pod "security-context-43a66d72-9a37-446a-942f-14d684ec68eb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.740361412s
Oct 10 15:45:26.672: INFO: Pod "security-context-43a66d72-9a37-446a-942f-14d684ec68eb": Phase="Pending", Reason="", readiness=false. Elapsed: 10.883686896s
Oct 10 15:45:28.817: INFO: Pod "security-context-43a66d72-9a37-446a-942f-14d684ec68eb": Phase="Pending", Reason="", readiness=false. Elapsed: 13.028031468s
Oct 10 15:45:30.961: INFO: Pod "security-context-43a66d72-9a37-446a-942f-14d684ec68eb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.171937263s
STEP: Saw pod success
Oct 10 15:45:30.961: INFO: Pod "security-context-43a66d72-9a37-446a-942f-14d684ec68eb" satisfied condition "Succeeded or Failed"
Oct 10 15:45:31.104: INFO: Trying to get logs from node ip-172-20-61-156.sa-east-1.compute.internal pod security-context-43a66d72-9a37-446a-942f-14d684ec68eb container test-container: <nil>
STEP: delete the pod
Oct 10 15:45:31.405: INFO: Waiting for pod security-context-43a66d72-9a37-446a-942f-14d684ec68eb to disappear
Oct 10 15:45:31.548: INFO: Pod security-context-43a66d72-9a37-446a-942f-14d684ec68eb no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:17.208 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support seccomp runtime/default [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:176
------------------------------
{"msg":"PASSED [sig-node] Security Context should support seccomp runtime/default [LinuxOnly]","total":-1,"completed":1,"skipped":3,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:45:31.996: INFO: Only supported for providers [azure] (not aws)
... skipping 93 lines ...
[It] should support non-existent path
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
Oct 10 15:45:15.651: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Oct 10 15:45:15.651: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-8blr
STEP: Creating a pod to test subpath
Oct 10 15:45:15.807: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-8blr" in namespace "provisioning-3579" to be "Succeeded or Failed"
Oct 10 15:45:15.960: INFO: Pod "pod-subpath-test-inlinevolume-8blr": Phase="Pending", Reason="", readiness=false. Elapsed: 152.85756ms
Oct 10 15:45:18.108: INFO: Pod "pod-subpath-test-inlinevolume-8blr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.301423273s
Oct 10 15:45:20.252: INFO: Pod "pod-subpath-test-inlinevolume-8blr": Phase="Pending", Reason="", readiness=false. Elapsed: 4.445370019s
Oct 10 15:45:22.397: INFO: Pod "pod-subpath-test-inlinevolume-8blr": Phase="Pending", Reason="", readiness=false. Elapsed: 6.589751115s
Oct 10 15:45:24.541: INFO: Pod "pod-subpath-test-inlinevolume-8blr": Phase="Pending", Reason="", readiness=false. Elapsed: 8.734350339s
Oct 10 15:45:26.687: INFO: Pod "pod-subpath-test-inlinevolume-8blr": Phase="Pending", Reason="", readiness=false. Elapsed: 10.879933275s
Oct 10 15:45:28.835: INFO: Pod "pod-subpath-test-inlinevolume-8blr": Phase="Pending", Reason="", readiness=false. Elapsed: 13.02804417s
Oct 10 15:45:30.980: INFO: Pod "pod-subpath-test-inlinevolume-8blr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.173378912s
STEP: Saw pod success
Oct 10 15:45:30.980: INFO: Pod "pod-subpath-test-inlinevolume-8blr" satisfied condition "Succeeded or Failed"
Oct 10 15:45:31.126: INFO: Trying to get logs from node ip-172-20-33-168.sa-east-1.compute.internal pod pod-subpath-test-inlinevolume-8blr container test-container-volume-inlinevolume-8blr: <nil>
STEP: delete the pod
Oct 10 15:45:31.428: INFO: Waiting for pod pod-subpath-test-inlinevolume-8blr to disappear
Oct 10 15:45:31.571: INFO: Pod pod-subpath-test-inlinevolume-8blr no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-8blr
Oct 10 15:45:31.571: INFO: Deleting pod "pod-subpath-test-inlinevolume-8blr" in namespace "provisioning-3579"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path","total":-1,"completed":1,"skipped":3,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:45:32.303: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 59 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:452
    that expects a client request
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:453
      should support a client that connects, sends NO DATA, and disconnects
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:454
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects a client request should support a client that connects, sends NO DATA, and disconnects","total":-1,"completed":1,"skipped":1,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:45:35.549: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 69 lines ...
• [SLOW TEST:12.908 seconds]
[sig-api-machinery] Watchers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":-1,"completed":2,"skipped":15,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:45:36.728: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 41 lines ...
• [SLOW TEST:23.759 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  pod should support shared volumes between containers [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":-1,"completed":1,"skipped":10,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:45:38.620: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 62 lines ...
• [SLOW TEST:23.844 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should include webhook resources in discovery documents [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":-1,"completed":1,"skipped":6,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:45:38.650: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 29 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:65
[It] should support unsafe sysctls which are actually allowed [MinimumKubeletVersion:1.21]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:121
STEP: Creating a pod with the kernel.shm_rmid_forced sysctl
STEP: Watching for error events or started pod
STEP: Waiting for pod completion
STEP: Checking that the pod succeeded
STEP: Getting logs from the pod
STEP: Checking that the sysctl is actually updated
[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:24.602 seconds]
[sig-node] Sysctls [LinuxOnly] [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should support unsafe sysctls which are actually allowed [MinimumKubeletVersion:1.21]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:121
------------------------------
{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeConformance] should support unsafe sysctls which are actually allowed [MinimumKubeletVersion:1.21]","total":-1,"completed":1,"skipped":9,"failed":0}

S
------------------------------
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 16 lines ...
• [SLOW TEST:25.058 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should create pods for an Indexed job with completion indexes and specified hostname
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:150
------------------------------
{"msg":"PASSED [sig-apps] Job should create pods for an Indexed job with completion indexes and specified hostname","total":-1,"completed":1,"skipped":11,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:45:39.905: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 56 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a container with runAsNonRoot
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:104
    should not run without a specified user ID
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:159
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should not run without a specified user ID","total":-1,"completed":2,"skipped":20,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:45:41.466: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 43 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap configmap-7400/configmap-test-2e81b16c-ef9f-420e-82dd-264a2e2414ca
STEP: Creating a pod to test consume configMaps
Oct 10 15:45:37.753: INFO: Waiting up to 5m0s for pod "pod-configmaps-e6d02575-a0a5-46bd-9eee-d5e93025b791" in namespace "configmap-7400" to be "Succeeded or Failed"
Oct 10 15:45:37.956: INFO: Pod "pod-configmaps-e6d02575-a0a5-46bd-9eee-d5e93025b791": Phase="Pending", Reason="", readiness=false. Elapsed: 203.130288ms
Oct 10 15:45:40.102: INFO: Pod "pod-configmaps-e6d02575-a0a5-46bd-9eee-d5e93025b791": Phase="Pending", Reason="", readiness=false. Elapsed: 2.349067411s
Oct 10 15:45:42.246: INFO: Pod "pod-configmaps-e6d02575-a0a5-46bd-9eee-d5e93025b791": Phase="Pending", Reason="", readiness=false. Elapsed: 4.492881932s
Oct 10 15:45:44.391: INFO: Pod "pod-configmaps-e6d02575-a0a5-46bd-9eee-d5e93025b791": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.637573104s
STEP: Saw pod success
Oct 10 15:45:44.391: INFO: Pod "pod-configmaps-e6d02575-a0a5-46bd-9eee-d5e93025b791" satisfied condition "Succeeded or Failed"
Oct 10 15:45:44.534: INFO: Trying to get logs from node ip-172-20-61-156.sa-east-1.compute.internal pod pod-configmaps-e6d02575-a0a5-46bd-9eee-d5e93025b791 container env-test: <nil>
STEP: delete the pod
Oct 10 15:45:44.857: INFO: Waiting for pod pod-configmaps-e6d02575-a0a5-46bd-9eee-d5e93025b791 to disappear
Oct 10 15:45:45.001: INFO: Pod pod-configmaps-e6d02575-a0a5-46bd-9eee-d5e93025b791 no longer exists
[AfterEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 31 lines ...
Oct 10 15:45:30.120: INFO: PersistentVolumeClaim pvc-vdb77 found but phase is Pending instead of Bound.
Oct 10 15:45:32.276: INFO: PersistentVolumeClaim pvc-vdb77 found and phase=Bound (2.299258377s)
Oct 10 15:45:32.276: INFO: Waiting up to 3m0s for PersistentVolume local-nhxtc to have phase Bound
Oct 10 15:45:32.419: INFO: PersistentVolume local-nhxtc found and phase=Bound (143.047346ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-szxr
STEP: Creating a pod to test subpath
Oct 10 15:45:32.852: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-szxr" in namespace "provisioning-7234" to be "Succeeded or Failed"
Oct 10 15:45:32.995: INFO: Pod "pod-subpath-test-preprovisionedpv-szxr": Phase="Pending", Reason="", readiness=false. Elapsed: 143.488394ms
Oct 10 15:45:35.139: INFO: Pod "pod-subpath-test-preprovisionedpv-szxr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.286761086s
Oct 10 15:45:37.283: INFO: Pod "pod-subpath-test-preprovisionedpv-szxr": Phase="Pending", Reason="", readiness=false. Elapsed: 4.431002435s
Oct 10 15:45:39.427: INFO: Pod "pod-subpath-test-preprovisionedpv-szxr": Phase="Pending", Reason="", readiness=false. Elapsed: 6.57485797s
Oct 10 15:45:41.570: INFO: Pod "pod-subpath-test-preprovisionedpv-szxr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.718263846s
STEP: Saw pod success
Oct 10 15:45:41.570: INFO: Pod "pod-subpath-test-preprovisionedpv-szxr" satisfied condition "Succeeded or Failed"
Oct 10 15:45:41.719: INFO: Trying to get logs from node ip-172-20-61-156.sa-east-1.compute.internal pod pod-subpath-test-preprovisionedpv-szxr container test-container-subpath-preprovisionedpv-szxr: <nil>
STEP: delete the pod
Oct 10 15:45:42.018: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-szxr to disappear
Oct 10 15:45:42.161: INFO: Pod pod-subpath-test-preprovisionedpv-szxr no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-szxr
Oct 10 15:45:42.161: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-szxr" in namespace "provisioning-7234"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:380
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":1,"skipped":6,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 57 lines ...
• [SLOW TEST:33.847 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":1,"skipped":2,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:45:48.595: INFO: Driver aws doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 429 lines ...
• [SLOW TEST:38.124 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of different groups [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":-1,"completed":1,"skipped":3,"failed":0}

SS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 39 lines ...
• [SLOW TEST:39.491 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to change the type from NodePort to ExternalName [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":-1,"completed":1,"skipped":8,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 26 lines ...
• [SLOW TEST:43.493 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":-1,"completed":1,"skipped":3,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:45:58.263: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 127 lines ...
      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1302
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull from private registry without secret [NodeConformance]","total":-1,"completed":2,"skipped":16,"failed":0}
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 10 15:45:50.737: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:91
STEP: Creating a pod to test downward API volume plugin
Oct 10 15:45:51.608: INFO: Waiting up to 5m0s for pod "metadata-volume-ae163644-ead1-44b1-9e45-b9683852b634" in namespace "projected-3663" to be "Succeeded or Failed"
Oct 10 15:45:51.751: INFO: Pod "metadata-volume-ae163644-ead1-44b1-9e45-b9683852b634": Phase="Pending", Reason="", readiness=false. Elapsed: 143.4295ms
Oct 10 15:45:53.896: INFO: Pod "metadata-volume-ae163644-ead1-44b1-9e45-b9683852b634": Phase="Pending", Reason="", readiness=false. Elapsed: 2.288221132s
Oct 10 15:45:56.050: INFO: Pod "metadata-volume-ae163644-ead1-44b1-9e45-b9683852b634": Phase="Pending", Reason="", readiness=false. Elapsed: 4.442144898s
Oct 10 15:45:58.196: INFO: Pod "metadata-volume-ae163644-ead1-44b1-9e45-b9683852b634": Phase="Pending", Reason="", readiness=false. Elapsed: 6.587833254s
Oct 10 15:46:00.340: INFO: Pod "metadata-volume-ae163644-ead1-44b1-9e45-b9683852b634": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.731842367s
STEP: Saw pod success
Oct 10 15:46:00.340: INFO: Pod "metadata-volume-ae163644-ead1-44b1-9e45-b9683852b634" satisfied condition "Succeeded or Failed"
Oct 10 15:46:00.483: INFO: Trying to get logs from node ip-172-20-61-156.sa-east-1.compute.internal pod metadata-volume-ae163644-ead1-44b1-9e45-b9683852b634 container client-container: <nil>
STEP: delete the pod
Oct 10 15:46:00.786: INFO: Waiting for pod metadata-volume-ae163644-ead1-44b1-9e45-b9683852b634 to disappear
Oct 10 15:46:00.928: INFO: Pod metadata-volume-ae163644-ead1-44b1-9e45-b9683852b634 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:10.480 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:91
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":3,"skipped":16,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:46:01.230: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
... skipping 133 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name projected-secret-test-map-b8953e32-a451-4494-b082-85422d7d3771
STEP: Creating a pod to test consume secrets
Oct 10 15:45:53.956: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-7c328f01-e6f2-4938-9058-6074c9ad8871" in namespace "projected-1019" to be "Succeeded or Failed"
Oct 10 15:45:54.100: INFO: Pod "pod-projected-secrets-7c328f01-e6f2-4938-9058-6074c9ad8871": Phase="Pending", Reason="", readiness=false. Elapsed: 143.344118ms
Oct 10 15:45:56.247: INFO: Pod "pod-projected-secrets-7c328f01-e6f2-4938-9058-6074c9ad8871": Phase="Pending", Reason="", readiness=false. Elapsed: 2.289979891s
Oct 10 15:45:58.390: INFO: Pod "pod-projected-secrets-7c328f01-e6f2-4938-9058-6074c9ad8871": Phase="Pending", Reason="", readiness=false. Elapsed: 4.433692544s
Oct 10 15:46:00.535: INFO: Pod "pod-projected-secrets-7c328f01-e6f2-4938-9058-6074c9ad8871": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.577954446s
STEP: Saw pod success
Oct 10 15:46:00.535: INFO: Pod "pod-projected-secrets-7c328f01-e6f2-4938-9058-6074c9ad8871" satisfied condition "Succeeded or Failed"
Oct 10 15:46:00.678: INFO: Trying to get logs from node ip-172-20-61-156.sa-east-1.compute.internal pod pod-projected-secrets-7c328f01-e6f2-4938-9058-6074c9ad8871 container projected-secret-volume-test: <nil>
STEP: delete the pod
Oct 10 15:46:00.989: INFO: Waiting for pod pod-projected-secrets-7c328f01-e6f2-4938-9058-6074c9ad8871 to disappear
Oct 10 15:46:01.132: INFO: Pod pod-projected-secrets-7c328f01-e6f2-4938-9058-6074c9ad8871 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:8.497 seconds]
[sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":5,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 10 15:45:54.341: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-5d04282c-1d5c-4e39-95c4-fbb52403de8f
STEP: Creating a pod to test consume configMaps
Oct 10 15:45:55.348: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a04770c3-f0ee-4b5c-b2d4-f475c3404949" in namespace "projected-2288" to be "Succeeded or Failed"
Oct 10 15:45:55.510: INFO: Pod "pod-projected-configmaps-a04770c3-f0ee-4b5c-b2d4-f475c3404949": Phase="Pending", Reason="", readiness=false. Elapsed: 161.792779ms
Oct 10 15:45:57.660: INFO: Pod "pod-projected-configmaps-a04770c3-f0ee-4b5c-b2d4-f475c3404949": Phase="Pending", Reason="", readiness=false. Elapsed: 2.31161618s
Oct 10 15:45:59.804: INFO: Pod "pod-projected-configmaps-a04770c3-f0ee-4b5c-b2d4-f475c3404949": Phase="Pending", Reason="", readiness=false. Elapsed: 4.455880993s
Oct 10 15:46:01.950: INFO: Pod "pod-projected-configmaps-a04770c3-f0ee-4b5c-b2d4-f475c3404949": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.601361553s
STEP: Saw pod success
Oct 10 15:46:01.950: INFO: Pod "pod-projected-configmaps-a04770c3-f0ee-4b5c-b2d4-f475c3404949" satisfied condition "Succeeded or Failed"
Oct 10 15:46:02.093: INFO: Trying to get logs from node ip-172-20-61-156.sa-east-1.compute.internal pod pod-projected-configmaps-a04770c3-f0ee-4b5c-b2d4-f475c3404949 container agnhost-container: <nil>
STEP: delete the pod
Oct 10 15:46:02.416: INFO: Waiting for pod pod-projected-configmaps-a04770c3-f0ee-4b5c-b2d4-f475c3404949 to disappear
Oct 10 15:46:02.560: INFO: Pod pod-projected-configmaps-a04770c3-f0ee-4b5c-b2d4-f475c3404949 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:8.510 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":13,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:46:02.861: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 58 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 10 15:46:03.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "podtemplate-5294" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":-1,"completed":3,"skipped":6,"failed":0}

SSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 19 lines ...
• [SLOW TEST:49.399 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  updates the published spec when one version gets renamed [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":-1,"completed":1,"skipped":52,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 16 lines ...
Oct 10 15:45:44.265: INFO: PersistentVolumeClaim pvc-wprcj found but phase is Pending instead of Bound.
Oct 10 15:45:46.413: INFO: PersistentVolumeClaim pvc-wprcj found and phase=Bound (2.290178315s)
Oct 10 15:45:46.413: INFO: Waiting up to 3m0s for PersistentVolume local-dcgbn to have phase Bound
Oct 10 15:45:46.559: INFO: PersistentVolume local-dcgbn found and phase=Bound (145.942822ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-cfcd
STEP: Creating a pod to test subpath
Oct 10 15:45:46.993: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-cfcd" in namespace "provisioning-2115" to be "Succeeded or Failed"
Oct 10 15:45:47.136: INFO: Pod "pod-subpath-test-preprovisionedpv-cfcd": Phase="Pending", Reason="", readiness=false. Elapsed: 143.145415ms
Oct 10 15:45:49.282: INFO: Pod "pod-subpath-test-preprovisionedpv-cfcd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.289363849s
Oct 10 15:45:51.425: INFO: Pod "pod-subpath-test-preprovisionedpv-cfcd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.432432957s
Oct 10 15:45:53.568: INFO: Pod "pod-subpath-test-preprovisionedpv-cfcd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.575644516s
Oct 10 15:45:55.712: INFO: Pod "pod-subpath-test-preprovisionedpv-cfcd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.7195847s
STEP: Saw pod success
Oct 10 15:45:55.712: INFO: Pod "pod-subpath-test-preprovisionedpv-cfcd" satisfied condition "Succeeded or Failed"
Oct 10 15:45:55.856: INFO: Trying to get logs from node ip-172-20-42-51.sa-east-1.compute.internal pod pod-subpath-test-preprovisionedpv-cfcd container test-container-subpath-preprovisionedpv-cfcd: <nil>
STEP: delete the pod
Oct 10 15:45:56.178: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-cfcd to disappear
Oct 10 15:45:56.321: INFO: Pod pod-subpath-test-preprovisionedpv-cfcd no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-cfcd
Oct 10 15:45:56.321: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-cfcd" in namespace "provisioning-2115"
STEP: Creating pod pod-subpath-test-preprovisionedpv-cfcd
STEP: Creating a pod to test subpath
Oct 10 15:45:56.608: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-cfcd" in namespace "provisioning-2115" to be "Succeeded or Failed"
Oct 10 15:45:56.751: INFO: Pod "pod-subpath-test-preprovisionedpv-cfcd": Phase="Pending", Reason="", readiness=false. Elapsed: 142.644614ms
Oct 10 15:45:58.895: INFO: Pod "pod-subpath-test-preprovisionedpv-cfcd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.286745218s
Oct 10 15:46:01.038: INFO: Pod "pod-subpath-test-preprovisionedpv-cfcd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.42995757s
STEP: Saw pod success
Oct 10 15:46:01.038: INFO: Pod "pod-subpath-test-preprovisionedpv-cfcd" satisfied condition "Succeeded or Failed"
Oct 10 15:46:01.181: INFO: Trying to get logs from node ip-172-20-42-51.sa-east-1.compute.internal pod pod-subpath-test-preprovisionedpv-cfcd container test-container-subpath-preprovisionedpv-cfcd: <nil>
STEP: delete the pod
Oct 10 15:46:01.489: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-cfcd to disappear
Oct 10 15:46:01.632: INFO: Pod pod-subpath-test-preprovisionedpv-cfcd no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-cfcd
Oct 10 15:46:01.632: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-cfcd" in namespace "provisioning-2115"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:395
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":2,"skipped":10,"failed":0}

SSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:46:04.704: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 36 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 10 15:46:04.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-4479" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":-1,"completed":3,"skipped":25,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:46:04.826: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 2 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: emptydir]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver emptydir doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 112 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":1,"skipped":2,"failed":0}

S
------------------------------
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
W1010 15:45:15.327738    5385 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 10 15:45:15.327: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Oct 10 15:45:15.923: INFO: created pod
Oct 10 15:45:15.923: INFO: Waiting up to 5m0s for pod "oidc-discovery-validator" in namespace "svcaccounts-1913" to be "Succeeded or Failed"
Oct 10 15:45:16.069: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 146.449982ms
Oct 10 15:45:18.215: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.292568495s
Oct 10 15:45:20.360: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.436987739s
Oct 10 15:45:22.507: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 6.584185664s
Oct 10 15:45:24.652: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 8.729060121s
Oct 10 15:45:26.797: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 10.87476311s
Oct 10 15:45:28.943: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 13.020148561s
Oct 10 15:45:31.088: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 15.165874242s
Oct 10 15:45:33.234: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 17.310976626s
Oct 10 15:45:35.379: INFO: Pod "oidc-discovery-validator": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.456395146s
STEP: Saw pod success
Oct 10 15:45:35.379: INFO: Pod "oidc-discovery-validator" satisfied condition "Succeeded or Failed"
Oct 10 15:46:05.380: INFO: polling logs
Oct 10 15:46:05.542: INFO: Pod logs: 
2021/10/10 15:45:33 OK: Got token
2021/10/10 15:45:33 validating with in-cluster discovery
2021/10/10 15:45:33 OK: got issuer https://api.internal.e2e-2cea5de97a-6a582.test-cncf-aws.k8s.io
2021/10/10 15:45:33 Full, not-validated claims: 
... skipping 14 lines ...
• [SLOW TEST:51.375 seconds]
[sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]","total":-1,"completed":1,"skipped":1,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:46:06.161: INFO: Only supported for providers [openstack] (not aws)
... skipping 55 lines ...
Oct 10 15:45:17.754: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass volume-28259hv8
STEP: creating a claim
Oct 10 15:45:17.926: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod exec-volume-test-dynamicpv-6grb
STEP: Creating a pod to test exec-volume-test
Oct 10 15:45:18.380: INFO: Waiting up to 5m0s for pod "exec-volume-test-dynamicpv-6grb" in namespace "volume-282" to be "Succeeded or Failed"
Oct 10 15:45:18.524: INFO: Pod "exec-volume-test-dynamicpv-6grb": Phase="Pending", Reason="", readiness=false. Elapsed: 143.365712ms
Oct 10 15:45:20.668: INFO: Pod "exec-volume-test-dynamicpv-6grb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.287778388s
Oct 10 15:45:22.812: INFO: Pod "exec-volume-test-dynamicpv-6grb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.432192271s
Oct 10 15:45:24.984: INFO: Pod "exec-volume-test-dynamicpv-6grb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.6040509s
Oct 10 15:45:27.129: INFO: Pod "exec-volume-test-dynamicpv-6grb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.749081383s
Oct 10 15:45:29.273: INFO: Pod "exec-volume-test-dynamicpv-6grb": Phase="Pending", Reason="", readiness=false. Elapsed: 10.892808309s
... skipping 8 lines ...
Oct 10 15:45:48.574: INFO: Pod "exec-volume-test-dynamicpv-6grb": Phase="Pending", Reason="", readiness=false. Elapsed: 30.193560229s
Oct 10 15:45:50.719: INFO: Pod "exec-volume-test-dynamicpv-6grb": Phase="Pending", Reason="", readiness=false. Elapsed: 32.338488694s
Oct 10 15:45:52.863: INFO: Pod "exec-volume-test-dynamicpv-6grb": Phase="Pending", Reason="", readiness=false. Elapsed: 34.482496256s
Oct 10 15:45:55.008: INFO: Pod "exec-volume-test-dynamicpv-6grb": Phase="Pending", Reason="", readiness=false. Elapsed: 36.62750713s
Oct 10 15:45:57.152: INFO: Pod "exec-volume-test-dynamicpv-6grb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 38.771889553s
STEP: Saw pod success
Oct 10 15:45:57.152: INFO: Pod "exec-volume-test-dynamicpv-6grb" satisfied condition "Succeeded or Failed"
Oct 10 15:45:57.295: INFO: Trying to get logs from node ip-172-20-33-168.sa-east-1.compute.internal pod exec-volume-test-dynamicpv-6grb container exec-container-dynamicpv-6grb: <nil>
STEP: delete the pod
Oct 10 15:45:57.591: INFO: Waiting for pod exec-volume-test-dynamicpv-6grb to disappear
Oct 10 15:45:57.736: INFO: Pod exec-volume-test-dynamicpv-6grb no longer exists
STEP: Deleting pod exec-volume-test-dynamicpv-6grb
Oct 10 15:45:57.736: INFO: Deleting pod "exec-volume-test-dynamicpv-6grb" in namespace "volume-282"
... skipping 17 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (ext4)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume","total":-1,"completed":2,"skipped":13,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 5 lines ...
[It] should support existing single file [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
Oct 10 15:46:02.022: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Oct 10 15:46:02.166: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-96tk
STEP: Creating a pod to test subpath
Oct 10 15:46:02.313: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-96tk" in namespace "provisioning-6495" to be "Succeeded or Failed"
Oct 10 15:46:02.456: INFO: Pod "pod-subpath-test-inlinevolume-96tk": Phase="Pending", Reason="", readiness=false. Elapsed: 143.068973ms
Oct 10 15:46:04.603: INFO: Pod "pod-subpath-test-inlinevolume-96tk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.289530829s
Oct 10 15:46:06.747: INFO: Pod "pod-subpath-test-inlinevolume-96tk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.434264103s
Oct 10 15:46:08.891: INFO: Pod "pod-subpath-test-inlinevolume-96tk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.57825628s
STEP: Saw pod success
Oct 10 15:46:08.891: INFO: Pod "pod-subpath-test-inlinevolume-96tk" satisfied condition "Succeeded or Failed"
Oct 10 15:46:09.035: INFO: Trying to get logs from node ip-172-20-54-137.sa-east-1.compute.internal pod pod-subpath-test-inlinevolume-96tk container test-container-subpath-inlinevolume-96tk: <nil>
STEP: delete the pod
Oct 10 15:46:09.331: INFO: Waiting for pod pod-subpath-test-inlinevolume-96tk to disappear
Oct 10 15:46:09.476: INFO: Pod pod-subpath-test-inlinevolume-96tk no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-96tk
Oct 10 15:46:09.476: INFO: Deleting pod "pod-subpath-test-inlinevolume-96tk" in namespace "provisioning-6495"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":4,"skipped":29,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:46:10.089: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 113 lines ...
Oct 10 15:45:58.879: INFO: PersistentVolumeClaim pvc-mb7dz found but phase is Pending instead of Bound.
Oct 10 15:46:01.025: INFO: PersistentVolumeClaim pvc-mb7dz found and phase=Bound (6.577290137s)
Oct 10 15:46:01.025: INFO: Waiting up to 3m0s for PersistentVolume local-pnpwl to have phase Bound
Oct 10 15:46:01.168: INFO: PersistentVolume local-pnpwl found and phase=Bound (143.134031ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-vnzc
STEP: Creating a pod to test subpath
Oct 10 15:46:01.600: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-vnzc" in namespace "provisioning-8171" to be "Succeeded or Failed"
Oct 10 15:46:01.744: INFO: Pod "pod-subpath-test-preprovisionedpv-vnzc": Phase="Pending", Reason="", readiness=false. Elapsed: 143.852904ms
Oct 10 15:46:03.888: INFO: Pod "pod-subpath-test-preprovisionedpv-vnzc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.287842063s
Oct 10 15:46:06.033: INFO: Pod "pod-subpath-test-preprovisionedpv-vnzc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.433077912s
Oct 10 15:46:08.178: INFO: Pod "pod-subpath-test-preprovisionedpv-vnzc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.57758721s
Oct 10 15:46:10.323: INFO: Pod "pod-subpath-test-preprovisionedpv-vnzc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.722426002s
STEP: Saw pod success
Oct 10 15:46:10.323: INFO: Pod "pod-subpath-test-preprovisionedpv-vnzc" satisfied condition "Succeeded or Failed"
Oct 10 15:46:10.467: INFO: Trying to get logs from node ip-172-20-54-137.sa-east-1.compute.internal pod pod-subpath-test-preprovisionedpv-vnzc container test-container-volume-preprovisionedpv-vnzc: <nil>
STEP: delete the pod
Oct 10 15:46:10.771: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-vnzc to disappear
Oct 10 15:46:10.914: INFO: Pod pod-subpath-test-preprovisionedpv-vnzc no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-vnzc
Oct 10 15:46:10.915: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-vnzc" in namespace "provisioning-8171"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":2,"skipped":61,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:46:13.253: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 100 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":2,"skipped":10,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:46:13.336: INFO: Only supported for providers [gce gke] (not aws)
... skipping 46 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:48
STEP: Creating a pod to test hostPath mode
Oct 10 15:46:05.321: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-9771" to be "Succeeded or Failed"
Oct 10 15:46:05.464: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 143.830685ms
Oct 10 15:46:07.609: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.287902342s
Oct 10 15:46:09.753: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.432564206s
Oct 10 15:46:11.899: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.577923027s
Oct 10 15:46:14.043: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.722529578s
STEP: Saw pod success
Oct 10 15:46:14.043: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
Oct 10 15:46:14.191: INFO: Trying to get logs from node ip-172-20-61-156.sa-east-1.compute.internal pod pod-host-path-test container test-container-1: <nil>
STEP: delete the pod
Oct 10 15:46:14.499: INFO: Waiting for pod pod-host-path-test to disappear
Oct 10 15:46:14.643: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:10.479 seconds]
[sig-storage] HostPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should give a volume the correct mode [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:48
------------------------------
{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","total":-1,"completed":2,"skipped":57,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:46:14.954: INFO: Only supported for providers [azure] (not aws)
... skipping 23 lines ...
Oct 10 15:46:04.853: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0644 on tmpfs
Oct 10 15:46:05.718: INFO: Waiting up to 5m0s for pod "pod-c0df7819-e195-46f9-b39a-3f838507d4ff" in namespace "emptydir-2647" to be "Succeeded or Failed"
Oct 10 15:46:05.861: INFO: Pod "pod-c0df7819-e195-46f9-b39a-3f838507d4ff": Phase="Pending", Reason="", readiness=false. Elapsed: 143.307459ms
Oct 10 15:46:08.007: INFO: Pod "pod-c0df7819-e195-46f9-b39a-3f838507d4ff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.288511009s
Oct 10 15:46:10.151: INFO: Pod "pod-c0df7819-e195-46f9-b39a-3f838507d4ff": Phase="Pending", Reason="", readiness=false. Elapsed: 4.433049128s
Oct 10 15:46:12.296: INFO: Pod "pod-c0df7819-e195-46f9-b39a-3f838507d4ff": Phase="Pending", Reason="", readiness=false. Elapsed: 6.577824018s
Oct 10 15:46:14.440: INFO: Pod "pod-c0df7819-e195-46f9-b39a-3f838507d4ff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.7224558s
STEP: Saw pod success
Oct 10 15:46:14.441: INFO: Pod "pod-c0df7819-e195-46f9-b39a-3f838507d4ff" satisfied condition "Succeeded or Failed"
Oct 10 15:46:14.584: INFO: Trying to get logs from node ip-172-20-61-156.sa-east-1.compute.internal pod pod-c0df7819-e195-46f9-b39a-3f838507d4ff container test-container: <nil>
STEP: delete the pod
Oct 10 15:46:14.879: INFO: Waiting for pod pod-c0df7819-e195-46f9-b39a-3f838507d4ff to disappear
Oct 10 15:46:15.022: INFO: Pod pod-c0df7819-e195-46f9-b39a-3f838507d4ff no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:10.470 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":28,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:46:15.357: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 72 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    when starting a container that exits
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:42
      should run with the expected status [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":16,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:46:16.335: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 79 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23
  Granular Checks: Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":15,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:46:18.837: INFO: Driver hostPath doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 124 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:474
    that expects NO client request
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:484
      should support a client that connects, sends DATA, and disconnects
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:485
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects NO client request should support a client that connects, sends DATA, and disconnects","total":-1,"completed":2,"skipped":23,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:46:19.421: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 34 lines ...
      Driver local doesn't support InlineVolume -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":22,"failed":0}
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 10 15:45:45.300: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 175 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  CustomResourceDefinition Watch
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42
    watch on custom resource definition objects [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":-1,"completed":1,"skipped":14,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:46:22.101: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 52 lines ...
STEP: Wait for the deployment to be ready
Oct 10 15:46:15.575: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63769477575, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63769477575, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63769477575, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63769477575, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 10 15:46:17.720: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63769477575, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63769477575, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63769477575, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63769477575, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Oct 10 15:46:20.873: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 10 15:46:21.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7246" for this suite.
... skipping 2 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102


• [SLOW TEST:9.738 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should unconditionally reject operations on fail closed webhook [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":-1,"completed":3,"skipped":69,"failed":0}

S
------------------------------
[BeforeEach] [sig-network] EndpointSlice
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 8 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 10 15:46:23.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslice-74" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","total":-1,"completed":2,"skipped":25,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
... skipping 92 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (ext4)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data","total":-1,"completed":2,"skipped":8,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:46:23.484: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 49 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 10 15:46:26.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ingress-3077" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":-1,"completed":3,"skipped":40,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:46:26.667: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 80 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":3,"skipped":24,"failed":0}

SSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Pod Disks
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 9 lines ...
STEP: Destroying namespace "pod-disks-7204" for this suite.


S [SKIPPING] in Spec Setup (BeforeEach) [1.014 seconds]
[sig-storage] Pod Disks
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should be able to delete a non-existent PD without error [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:449

  Requires at least 2 nodes (not 0)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:75
------------------------------
... skipping 11 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 10 15:46:30.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-3531" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Lease lease API should be available [Conformance]","total":-1,"completed":4,"skipped":53,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:46:30.932: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
... skipping 23 lines ...
Oct 10 15:46:26.690: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0666 on node default medium
Oct 10 15:46:27.557: INFO: Waiting up to 5m0s for pod "pod-f6c93010-f15a-424b-b802-150f181b0ac0" in namespace "emptydir-326" to be "Succeeded or Failed"
Oct 10 15:46:27.701: INFO: Pod "pod-f6c93010-f15a-424b-b802-150f181b0ac0": Phase="Pending", Reason="", readiness=false. Elapsed: 143.577222ms
Oct 10 15:46:29.845: INFO: Pod "pod-f6c93010-f15a-424b-b802-150f181b0ac0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.287647424s
Oct 10 15:46:31.990: INFO: Pod "pod-f6c93010-f15a-424b-b802-150f181b0ac0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.433094745s
STEP: Saw pod success
Oct 10 15:46:31.990: INFO: Pod "pod-f6c93010-f15a-424b-b802-150f181b0ac0" satisfied condition "Succeeded or Failed"
Oct 10 15:46:32.134: INFO: Trying to get logs from node ip-172-20-54-137.sa-east-1.compute.internal pod pod-f6c93010-f15a-424b-b802-150f181b0ac0 container test-container: <nil>
STEP: delete the pod
Oct 10 15:46:32.483: INFO: Waiting for pod pod-f6c93010-f15a-424b-b802-150f181b0ac0 to disappear
Oct 10 15:46:32.626: INFO: Pod pod-f6c93010-f15a-424b-b802-150f181b0ac0 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.226 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":47,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:46:32.943: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 43 lines ...
Oct 10 15:46:14.591: INFO: PersistentVolumeClaim pvc-xjkdn found but phase is Pending instead of Bound.
Oct 10 15:46:16.735: INFO: PersistentVolumeClaim pvc-xjkdn found and phase=Bound (6.58416295s)
Oct 10 15:46:16.735: INFO: Waiting up to 3m0s for PersistentVolume local-txmxt to have phase Bound
Oct 10 15:46:16.878: INFO: PersistentVolume local-txmxt found and phase=Bound (143.161601ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-rhxz
STEP: Creating a pod to test subpath
Oct 10 15:46:17.311: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-rhxz" in namespace "provisioning-841" to be "Succeeded or Failed"
Oct 10 15:46:17.454: INFO: Pod "pod-subpath-test-preprovisionedpv-rhxz": Phase="Pending", Reason="", readiness=false. Elapsed: 142.993632ms
Oct 10 15:46:19.601: INFO: Pod "pod-subpath-test-preprovisionedpv-rhxz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.29023135s
Oct 10 15:46:21.754: INFO: Pod "pod-subpath-test-preprovisionedpv-rhxz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.442934871s
Oct 10 15:46:23.906: INFO: Pod "pod-subpath-test-preprovisionedpv-rhxz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.595775892s
STEP: Saw pod success
Oct 10 15:46:23.907: INFO: Pod "pod-subpath-test-preprovisionedpv-rhxz" satisfied condition "Succeeded or Failed"
Oct 10 15:46:24.059: INFO: Trying to get logs from node ip-172-20-33-168.sa-east-1.compute.internal pod pod-subpath-test-preprovisionedpv-rhxz container test-container-subpath-preprovisionedpv-rhxz: <nil>
STEP: delete the pod
Oct 10 15:46:24.361: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-rhxz to disappear
Oct 10 15:46:24.517: INFO: Pod pod-subpath-test-preprovisionedpv-rhxz no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-rhxz
Oct 10 15:46:24.518: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-rhxz" in namespace "provisioning-841"
STEP: Creating pod pod-subpath-test-preprovisionedpv-rhxz
STEP: Creating a pod to test subpath
Oct 10 15:46:24.835: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-rhxz" in namespace "provisioning-841" to be "Succeeded or Failed"
Oct 10 15:46:24.995: INFO: Pod "pod-subpath-test-preprovisionedpv-rhxz": Phase="Pending", Reason="", readiness=false. Elapsed: 159.63337ms
Oct 10 15:46:27.145: INFO: Pod "pod-subpath-test-preprovisionedpv-rhxz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.310147734s
Oct 10 15:46:29.289: INFO: Pod "pod-subpath-test-preprovisionedpv-rhxz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.454334266s
STEP: Saw pod success
Oct 10 15:46:29.290: INFO: Pod "pod-subpath-test-preprovisionedpv-rhxz" satisfied condition "Succeeded or Failed"
Oct 10 15:46:29.433: INFO: Trying to get logs from node ip-172-20-33-168.sa-east-1.compute.internal pod pod-subpath-test-preprovisionedpv-rhxz container test-container-subpath-preprovisionedpv-rhxz: <nil>
STEP: delete the pod
Oct 10 15:46:29.728: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-rhxz to disappear
Oct 10 15:46:29.871: INFO: Pod pod-subpath-test-preprovisionedpv-rhxz no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-rhxz
Oct 10 15:46:29.872: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-rhxz" in namespace "provisioning-841"
... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:395
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":4,"skipped":18,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 9 lines ...
Oct 10 15:46:15.386: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true)
Oct 10 15:46:17.387: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true)
Oct 10 15:46:17.530: INFO: Running '/tmp/kubectl1123422819/kubectl --server=https://api.e2e-2cea5de97a-6a582.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-6111 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode'
Oct 10 15:46:19.131: INFO: rc: 7
Oct 10 15:46:19.277: INFO: Waiting for pod kube-proxy-mode-detector to disappear
Oct 10 15:46:19.420: INFO: Pod kube-proxy-mode-detector no longer exists
Oct 10 15:46:19.420: INFO: Couldn't detect KubeProxy mode - test failure may be expected: error running /tmp/kubectl1123422819/kubectl --server=https://api.e2e-2cea5de97a-6a582.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-6111 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode:
Command stdout:

stderr:
+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode
command terminated with exit code 7

error:
exit status 7
STEP: creating a TCP service sourceip-test with type=ClusterIP in namespace services-6111
Oct 10 15:46:19.574: INFO: sourceip-test cluster ip: 100.69.48.136
STEP: Picking 2 Nodes to test whether source IP is preserved or not
STEP: Creating a webserver pod to be part of the TCP service which echoes back source ip
Oct 10 15:46:20.005: INFO: The status of Pod echo-sourceip is Pending, waiting for it to be Running (with Ready = true)
... skipping 30 lines ...
• [SLOW TEST:27.076 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:924
------------------------------
{"msg":"PASSED [sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]","total":-1,"completed":3,"skipped":18,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:46:37.325: INFO: Driver local doesn't support ext4 -- skipping
... skipping 47 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-map-de5a0950-ef9e-492b-937e-3eaf0a9edd0c
STEP: Creating a pod to test consume configMaps
Oct 10 15:46:36.138: INFO: Waiting up to 5m0s for pod "pod-configmaps-b9d56cdb-47ce-4546-a610-ec4873a03305" in namespace "configmap-4239" to be "Succeeded or Failed"
Oct 10 15:46:36.281: INFO: Pod "pod-configmaps-b9d56cdb-47ce-4546-a610-ec4873a03305": Phase="Pending", Reason="", readiness=false. Elapsed: 143.21416ms
Oct 10 15:46:38.431: INFO: Pod "pod-configmaps-b9d56cdb-47ce-4546-a610-ec4873a03305": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.293186626s
STEP: Saw pod success
Oct 10 15:46:38.431: INFO: Pod "pod-configmaps-b9d56cdb-47ce-4546-a610-ec4873a03305" satisfied condition "Succeeded or Failed"
Oct 10 15:46:38.574: INFO: Trying to get logs from node ip-172-20-54-137.sa-east-1.compute.internal pod pod-configmaps-b9d56cdb-47ce-4546-a610-ec4873a03305 container agnhost-container: <nil>
STEP: delete the pod
Oct 10 15:46:38.872: INFO: Waiting for pod pod-configmaps-b9d56cdb-47ce-4546-a610-ec4873a03305 to disappear
Oct 10 15:46:39.015: INFO: Pod pod-configmaps-b9d56cdb-47ce-4546-a610-ec4873a03305 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 10 15:46:39.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4239" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":21,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 54 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":4,"skipped":24,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:46:39.869: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 352 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:445
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":5,"skipped":37,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:46:44.502: INFO: Driver "local" does not provide raw block - skipping
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 125 lines ...
• [SLOW TEST:22.472 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should support pod readiness gates [NodeFeature:PodReadinessGate]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:775
------------------------------
{"msg":"PASSED [sig-node] Pods should support pod readiness gates [NodeFeature:PodReadinessGate]","total":-1,"completed":3,"skipped":32,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:46:46.003: INFO: Only supported for providers [vsphere] (not aws)
... skipping 67 lines ...
• [SLOW TEST:24.189 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a custom resource.
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/resource_quota.go:582
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a custom resource.","total":-1,"completed":4,"skipped":70,"failed":0}

SS
------------------------------
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 15 lines ...
• [SLOW TEST:31.906 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be ready immediately after startupProbe succeeds
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:406
------------------------------
{"msg":"PASSED [sig-node] Probing container should be ready immediately after startupProbe succeeds","total":-1,"completed":3,"skipped":19,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] client-go should negotiate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 4 lines ...
[AfterEach] [sig-api-machinery] client-go should negotiate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 10 15:46:48.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

•
------------------------------
{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/json\"","total":-1,"completed":4,"skipped":20,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:46:48.717: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 151 lines ...
[BeforeEach] Pod Container lifecycle
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:478
[It] should not create extra sandbox if all containers are done
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:482
STEP: creating the pod that should always exit 0
STEP: submitting the pod to kubernetes
Oct 10 15:46:40.183: INFO: Waiting up to 5m0s for pod "pod-always-succeed820ed4e1-47af-49f6-84d5-16266b0aacdb" in namespace "pods-4030" to be "Succeeded or Failed"
Oct 10 15:46:40.328: INFO: Pod "pod-always-succeed820ed4e1-47af-49f6-84d5-16266b0aacdb": Phase="Pending", Reason="", readiness=false. Elapsed: 144.565938ms
Oct 10 15:46:42.471: INFO: Pod "pod-always-succeed820ed4e1-47af-49f6-84d5-16266b0aacdb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.288514819s
Oct 10 15:46:44.615: INFO: Pod "pod-always-succeed820ed4e1-47af-49f6-84d5-16266b0aacdb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.432494836s
Oct 10 15:46:46.759: INFO: Pod "pod-always-succeed820ed4e1-47af-49f6-84d5-16266b0aacdb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.576484012s
STEP: Saw pod success
Oct 10 15:46:46.760: INFO: Pod "pod-always-succeed820ed4e1-47af-49f6-84d5-16266b0aacdb" satisfied condition "Succeeded or Failed"
STEP: Getting events about the pod
STEP: Checking events about the pod
STEP: deleting the pod
[AfterEach] [sig-node] Pods Extended
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 10 15:46:49.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 5 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  Pod Container lifecycle
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:476
    should not create extra sandbox if all containers are done
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:482
------------------------------
{"msg":"PASSED [sig-node] Pods Extended Pod Container lifecycle should not create extra sandbox if all containers are done","total":-1,"completed":6,"skipped":22,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] client-go should negotiate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 4 lines ...
[AfterEach] [sig-api-machinery] client-go should negotiate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 10 15:46:49.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

•
------------------------------
{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/json,application/vnd.kubernetes.protobuf\"","total":-1,"completed":7,"skipped":29,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:46:49.830: INFO: Driver hostPath doesn't support ext4 -- skipping
... skipping 28 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-link]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 141 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should create read/write inline ephemeral volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:166
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read/write inline ephemeral volume","total":-1,"completed":1,"skipped":8,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 13 lines ...
STEP: looking for the results for each expected name from probers
Oct 10 15:46:41.931: INFO: Unable to read wheezy_udp@dns-test-service.dns-7190.svc.cluster.local from pod dns-7190/dns-test-7240e985-dd44-4f82-a3b8-e9291275df0e: the server could not find the requested resource (get pods dns-test-7240e985-dd44-4f82-a3b8-e9291275df0e)
Oct 10 15:46:42.075: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7190.svc.cluster.local from pod dns-7190/dns-test-7240e985-dd44-4f82-a3b8-e9291275df0e: the server could not find the requested resource (get pods dns-test-7240e985-dd44-4f82-a3b8-e9291275df0e)
Oct 10 15:46:42.220: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7190.svc.cluster.local from pod dns-7190/dns-test-7240e985-dd44-4f82-a3b8-e9291275df0e: the server could not find the requested resource (get pods dns-test-7240e985-dd44-4f82-a3b8-e9291275df0e)
Oct 10 15:46:42.365: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7190.svc.cluster.local from pod dns-7190/dns-test-7240e985-dd44-4f82-a3b8-e9291275df0e: the server could not find the requested resource (get pods dns-test-7240e985-dd44-4f82-a3b8-e9291275df0e)
Oct 10 15:46:43.381: INFO: Unable to read jessie_udp@dns-test-service.dns-7190.svc.cluster.local from pod dns-7190/dns-test-7240e985-dd44-4f82-a3b8-e9291275df0e: the server could not find the requested resource (get pods dns-test-7240e985-dd44-4f82-a3b8-e9291275df0e)
Oct 10 15:46:44.694: INFO: Lookups using dns-7190/dns-test-7240e985-dd44-4f82-a3b8-e9291275df0e failed for: [wheezy_udp@dns-test-service.dns-7190.svc.cluster.local wheezy_tcp@dns-test-service.dns-7190.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7190.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7190.svc.cluster.local jessie_udp@dns-test-service.dns-7190.svc.cluster.local]

Oct 10 15:46:52.871: INFO: DNS probes using dns-7190/dns-test-7240e985-dd44-4f82-a3b8-e9291275df0e succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
... skipping 6 lines ...
• [SLOW TEST:47.437 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide DNS for services  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services  [Conformance]","total":-1,"completed":2,"skipped":14,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:46:53.643: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 21 lines ...
Oct 10 15:46:53.653: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir volume type on tmpfs
Oct 10 15:46:54.521: INFO: Waiting up to 5m0s for pod "pod-d62f9bda-7742-40c0-9ec9-0c7286a9a8c6" in namespace "emptydir-5606" to be "Succeeded or Failed"
Oct 10 15:46:54.665: INFO: Pod "pod-d62f9bda-7742-40c0-9ec9-0c7286a9a8c6": Phase="Pending", Reason="", readiness=false. Elapsed: 143.906057ms
Oct 10 15:46:56.810: INFO: Pod "pod-d62f9bda-7742-40c0-9ec9-0c7286a9a8c6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.288745494s
STEP: Saw pod success
Oct 10 15:46:56.810: INFO: Pod "pod-d62f9bda-7742-40c0-9ec9-0c7286a9a8c6" satisfied condition "Succeeded or Failed"
Oct 10 15:46:56.954: INFO: Trying to get logs from node ip-172-20-33-168.sa-east-1.compute.internal pod pod-d62f9bda-7742-40c0-9ec9-0c7286a9a8c6 container test-container: <nil>
STEP: delete the pod
Oct 10 15:46:57.271: INFO: Waiting for pod pod-d62f9bda-7742-40c0-9ec9-0c7286a9a8c6 to disappear
Oct 10 15:46:57.415: INFO: Pod pod-d62f9bda-7742-40c0-9ec9-0c7286a9a8c6 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 10 15:46:57.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5606" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":16,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:46:57.721: INFO: Only supported for providers [gce gke] (not aws)
... skipping 217 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":5,"skipped":68,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
... skipping 76 lines ...
Oct 10 15:46:14.892: INFO: PersistentVolumeClaim csi-hostpathvs6x6 found but phase is Pending instead of Bound.
Oct 10 15:46:17.035: INFO: PersistentVolumeClaim csi-hostpathvs6x6 found but phase is Pending instead of Bound.
Oct 10 15:46:19.179: INFO: PersistentVolumeClaim csi-hostpathvs6x6 found but phase is Pending instead of Bound.
Oct 10 15:46:21.324: INFO: PersistentVolumeClaim csi-hostpathvs6x6 found and phase=Bound (40.894260424s)
STEP: Creating pod pod-subpath-test-dynamicpv-lhwq
STEP: Creating a pod to test subpath
Oct 10 15:46:21.770: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-lhwq" in namespace "provisioning-2620" to be "Succeeded or Failed"
Oct 10 15:46:21.913: INFO: Pod "pod-subpath-test-dynamicpv-lhwq": Phase="Pending", Reason="", readiness=false. Elapsed: 143.660334ms
Oct 10 15:46:24.061: INFO: Pod "pod-subpath-test-dynamicpv-lhwq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.291245193s
Oct 10 15:46:26.205: INFO: Pod "pod-subpath-test-dynamicpv-lhwq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.43524053s
Oct 10 15:46:28.349: INFO: Pod "pod-subpath-test-dynamicpv-lhwq": Phase="Pending", Reason="", readiness=false. Elapsed: 6.578713724s
Oct 10 15:46:30.496: INFO: Pod "pod-subpath-test-dynamicpv-lhwq": Phase="Pending", Reason="", readiness=false. Elapsed: 8.726193323s
Oct 10 15:46:32.640: INFO: Pod "pod-subpath-test-dynamicpv-lhwq": Phase="Pending", Reason="", readiness=false. Elapsed: 10.870604505s
Oct 10 15:46:34.785: INFO: Pod "pod-subpath-test-dynamicpv-lhwq": Phase="Pending", Reason="", readiness=false. Elapsed: 13.015045014s
Oct 10 15:46:36.930: INFO: Pod "pod-subpath-test-dynamicpv-lhwq": Phase="Pending", Reason="", readiness=false. Elapsed: 15.159752064s
Oct 10 15:46:39.074: INFO: Pod "pod-subpath-test-dynamicpv-lhwq": Phase="Pending", Reason="", readiness=false. Elapsed: 17.304199957s
Oct 10 15:46:41.219: INFO: Pod "pod-subpath-test-dynamicpv-lhwq": Phase="Pending", Reason="", readiness=false. Elapsed: 19.448756337s
Oct 10 15:46:43.362: INFO: Pod "pod-subpath-test-dynamicpv-lhwq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 21.592443048s
STEP: Saw pod success
Oct 10 15:46:43.362: INFO: Pod "pod-subpath-test-dynamicpv-lhwq" satisfied condition "Succeeded or Failed"
Oct 10 15:46:43.506: INFO: Trying to get logs from node ip-172-20-33-168.sa-east-1.compute.internal pod pod-subpath-test-dynamicpv-lhwq container test-container-subpath-dynamicpv-lhwq: <nil>
STEP: delete the pod
Oct 10 15:46:43.811: INFO: Waiting for pod pod-subpath-test-dynamicpv-lhwq to disappear
Oct 10 15:46:43.954: INFO: Pod pod-subpath-test-dynamicpv-lhwq no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-lhwq
Oct 10 15:46:43.954: INFO: Deleting pod "pod-subpath-test-dynamicpv-lhwq" in namespace "provisioning-2620"
... skipping 60 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:380
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":2,"skipped":5,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 86 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:379
    should support exec using resource/name
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:431
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec using resource/name","total":-1,"completed":8,"skipped":36,"failed":0}

SSSSSSSSSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works","total":-1,"completed":2,"skipped":9,"failed":0}
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 10 15:46:59.256: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward api env vars
Oct 10 15:47:00.117: INFO: Waiting up to 5m0s for pod "downward-api-b2b3b683-9684-409d-969a-8c526a13d2b5" in namespace "downward-api-1465" to be "Succeeded or Failed"
Oct 10 15:47:00.260: INFO: Pod "downward-api-b2b3b683-9684-409d-969a-8c526a13d2b5": Phase="Pending", Reason="", readiness=false. Elapsed: 143.018637ms
Oct 10 15:47:02.405: INFO: Pod "downward-api-b2b3b683-9684-409d-969a-8c526a13d2b5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.287281878s
Oct 10 15:47:04.548: INFO: Pod "downward-api-b2b3b683-9684-409d-969a-8c526a13d2b5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.430663298s
Oct 10 15:47:06.692: INFO: Pod "downward-api-b2b3b683-9684-409d-969a-8c526a13d2b5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.574926978s
STEP: Saw pod success
Oct 10 15:47:06.692: INFO: Pod "downward-api-b2b3b683-9684-409d-969a-8c526a13d2b5" satisfied condition "Succeeded or Failed"
Oct 10 15:47:06.836: INFO: Trying to get logs from node ip-172-20-33-168.sa-east-1.compute.internal pod downward-api-b2b3b683-9684-409d-969a-8c526a13d2b5 container dapi-container: <nil>
STEP: delete the pod
Oct 10 15:47:07.151: INFO: Waiting for pod downward-api-b2b3b683-9684-409d-969a-8c526a13d2b5 to disappear
Oct 10 15:47:07.294: INFO: Pod downward-api-b2b3b683-9684-409d-969a-8c526a13d2b5 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:8.328 seconds]
[sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":9,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:47:07.594: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 109 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-d7598acf-2b87-490d-9bf5-fb6becd77def
STEP: Creating a pod to test consume secrets
Oct 10 15:47:02.420: INFO: Waiting up to 5m0s for pod "pod-secrets-d63a69f3-c3f8-490e-af89-ca0bab9a2715" in namespace "secrets-2509" to be "Succeeded or Failed"
Oct 10 15:47:02.563: INFO: Pod "pod-secrets-d63a69f3-c3f8-490e-af89-ca0bab9a2715": Phase="Pending", Reason="", readiness=false. Elapsed: 143.822839ms
Oct 10 15:47:04.708: INFO: Pod "pod-secrets-d63a69f3-c3f8-490e-af89-ca0bab9a2715": Phase="Pending", Reason="", readiness=false. Elapsed: 2.28796867s
Oct 10 15:47:06.852: INFO: Pod "pod-secrets-d63a69f3-c3f8-490e-af89-ca0bab9a2715": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.432404334s
STEP: Saw pod success
Oct 10 15:47:06.852: INFO: Pod "pod-secrets-d63a69f3-c3f8-490e-af89-ca0bab9a2715" satisfied condition "Succeeded or Failed"
Oct 10 15:47:06.995: INFO: Trying to get logs from node ip-172-20-54-137.sa-east-1.compute.internal pod pod-secrets-d63a69f3-c3f8-490e-af89-ca0bab9a2715 container secret-env-test: <nil>
STEP: delete the pod
Oct 10 15:47:07.311: INFO: Waiting for pod pod-secrets-d63a69f3-c3f8-490e-af89-ca0bab9a2715 to disappear
Oct 10 15:47:07.454: INFO: Pod pod-secrets-d63a69f3-c3f8-490e-af89-ca0bab9a2715 no longer exists
[AfterEach] [sig-node] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.333 seconds]
[sig-node] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":69,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 19 lines ...
Oct 10 15:47:00.527: INFO: PersistentVolumeClaim pvc-8qsq4 found but phase is Pending instead of Bound.
Oct 10 15:47:02.671: INFO: PersistentVolumeClaim pvc-8qsq4 found and phase=Bound (10.880716034s)
Oct 10 15:47:02.671: INFO: Waiting up to 3m0s for PersistentVolume local-rhdlr to have phase Bound
Oct 10 15:47:02.814: INFO: PersistentVolume local-rhdlr found and phase=Bound (143.15385ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-9v6v
STEP: Creating a pod to test subpath
Oct 10 15:47:03.246: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-9v6v" in namespace "provisioning-1405" to be "Succeeded or Failed"
Oct 10 15:47:03.390: INFO: Pod "pod-subpath-test-preprovisionedpv-9v6v": Phase="Pending", Reason="", readiness=false. Elapsed: 143.790347ms
Oct 10 15:47:05.534: INFO: Pod "pod-subpath-test-preprovisionedpv-9v6v": Phase="Pending", Reason="", readiness=false. Elapsed: 2.287510861s
Oct 10 15:47:07.678: INFO: Pod "pod-subpath-test-preprovisionedpv-9v6v": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.432051297s
STEP: Saw pod success
Oct 10 15:47:07.678: INFO: Pod "pod-subpath-test-preprovisionedpv-9v6v" satisfied condition "Succeeded or Failed"
Oct 10 15:47:07.822: INFO: Trying to get logs from node ip-172-20-42-51.sa-east-1.compute.internal pod pod-subpath-test-preprovisionedpv-9v6v container test-container-subpath-preprovisionedpv-9v6v: <nil>
STEP: delete the pod
Oct 10 15:47:08.139: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-9v6v to disappear
Oct 10 15:47:08.282: INFO: Pod pod-subpath-test-preprovisionedpv-9v6v no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-9v6v
Oct 10 15:47:08.282: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-9v6v" in namespace "provisioning-1405"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:365
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":5,"skipped":72,"failed":0}

SS
------------------------------
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 33 lines ...
• [SLOW TEST:8.594 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":3,"skipped":11,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:47:11.725: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 41 lines ...
• [SLOW TEST:8.845 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":46,"failed":0}

SSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:47:15.385: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 201 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SSSSS
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":-1,"completed":3,"skipped":15,"failed":0}
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 10 15:47:16.333: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename tables
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 14 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 10 15:47:17.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-1093" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return generic metadata details across all namespaces for nodes","total":-1,"completed":4,"skipped":15,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:47:17.638: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 39 lines ...
• [SLOW TEST:10.844 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":72,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:47:18.619: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 39 lines ...
Oct 10 15:47:11.747: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:69
STEP: Creating a pod to test pod.Spec.SecurityContext.SupplementalGroups
Oct 10 15:47:12.612: INFO: Waiting up to 5m0s for pod "security-context-b10a4cc5-452b-4eb8-983c-09f743b63d57" in namespace "security-context-5081" to be "Succeeded or Failed"
Oct 10 15:47:12.759: INFO: Pod "security-context-b10a4cc5-452b-4eb8-983c-09f743b63d57": Phase="Pending", Reason="", readiness=false. Elapsed: 146.61389ms
Oct 10 15:47:14.903: INFO: Pod "security-context-b10a4cc5-452b-4eb8-983c-09f743b63d57": Phase="Pending", Reason="", readiness=false. Elapsed: 2.290856173s
Oct 10 15:47:17.047: INFO: Pod "security-context-b10a4cc5-452b-4eb8-983c-09f743b63d57": Phase="Pending", Reason="", readiness=false. Elapsed: 4.434782944s
Oct 10 15:47:19.192: INFO: Pod "security-context-b10a4cc5-452b-4eb8-983c-09f743b63d57": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.579564425s
STEP: Saw pod success
Oct 10 15:47:19.192: INFO: Pod "security-context-b10a4cc5-452b-4eb8-983c-09f743b63d57" satisfied condition "Succeeded or Failed"
Oct 10 15:47:19.335: INFO: Trying to get logs from node ip-172-20-42-51.sa-east-1.compute.internal pod security-context-b10a4cc5-452b-4eb8-983c-09f743b63d57 container test-container: <nil>
STEP: delete the pod
Oct 10 15:47:19.630: INFO: Waiting for pod security-context-b10a4cc5-452b-4eb8-983c-09f743b63d57 to disappear
Oct 10 15:47:19.774: INFO: Pod security-context-b10a4cc5-452b-4eb8-983c-09f743b63d57 no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:8.315 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:69
------------------------------
{"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]","total":-1,"completed":4,"skipped":16,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:47:20.082: INFO: Driver hostPathSymlink doesn't support ext4 -- skipping
... skipping 42 lines ...
• [SLOW TEST:53.063 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":62,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:47:24.033: INFO: Only supported for providers [gce gke] (not aws)
... skipping 159 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI Volume expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:562
    should expand volume by restarting pod if attach=on, nodeExpansion=on
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:591
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should expand volume by restarting pod if attach=on, nodeExpansion=on","total":-1,"completed":2,"skipped":18,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 16 lines ...
Oct 10 15:46:51.993: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-809.svc.cluster.local from pod dns-809/dns-test-4240213d-8bee-4c36-ad80-f156063038b5: the server could not find the requested resource (get pods dns-test-4240213d-8bee-4c36-ad80-f156063038b5)
Oct 10 15:46:52.144: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-809.svc.cluster.local from pod dns-809/dns-test-4240213d-8bee-4c36-ad80-f156063038b5: the server could not find the requested resource (get pods dns-test-4240213d-8bee-4c36-ad80-f156063038b5)
Oct 10 15:46:52.581: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-809.svc.cluster.local from pod dns-809/dns-test-4240213d-8bee-4c36-ad80-f156063038b5: the server could not find the requested resource (get pods dns-test-4240213d-8bee-4c36-ad80-f156063038b5)
Oct 10 15:46:52.725: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-809.svc.cluster.local from pod dns-809/dns-test-4240213d-8bee-4c36-ad80-f156063038b5: the server could not find the requested resource (get pods dns-test-4240213d-8bee-4c36-ad80-f156063038b5)
Oct 10 15:46:52.869: INFO: Unable to read jessie_udp@dns-test-service-2.dns-809.svc.cluster.local from pod dns-809/dns-test-4240213d-8bee-4c36-ad80-f156063038b5: the server could not find the requested resource (get pods dns-test-4240213d-8bee-4c36-ad80-f156063038b5)
Oct 10 15:46:53.013: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-809.svc.cluster.local from pod dns-809/dns-test-4240213d-8bee-4c36-ad80-f156063038b5: the server could not find the requested resource (get pods dns-test-4240213d-8bee-4c36-ad80-f156063038b5)
Oct 10 15:46:53.301: INFO: Lookups using dns-809/dns-test-4240213d-8bee-4c36-ad80-f156063038b5 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-809.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-809.svc.cluster.local wheezy_udp@dns-test-service-2.dns-809.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-809.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-809.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-809.svc.cluster.local jessie_udp@dns-test-service-2.dns-809.svc.cluster.local jessie_tcp@dns-test-service-2.dns-809.svc.cluster.local]

Oct 10 15:46:58.477: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-809.svc.cluster.local from pod dns-809/dns-test-4240213d-8bee-4c36-ad80-f156063038b5: the server could not find the requested resource (get pods dns-test-4240213d-8bee-4c36-ad80-f156063038b5)
Oct 10 15:46:58.625: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-809.svc.cluster.local from pod dns-809/dns-test-4240213d-8bee-4c36-ad80-f156063038b5: the server could not find the requested resource (get pods dns-test-4240213d-8bee-4c36-ad80-f156063038b5)
Oct 10 15:46:58.769: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-809.svc.cluster.local from pod dns-809/dns-test-4240213d-8bee-4c36-ad80-f156063038b5: the server could not find the requested resource (get pods dns-test-4240213d-8bee-4c36-ad80-f156063038b5)
Oct 10 15:46:58.913: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-809.svc.cluster.local from pod dns-809/dns-test-4240213d-8bee-4c36-ad80-f156063038b5: the server could not find the requested resource (get pods dns-test-4240213d-8bee-4c36-ad80-f156063038b5)
Oct 10 15:46:59.345: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-809.svc.cluster.local from pod dns-809/dns-test-4240213d-8bee-4c36-ad80-f156063038b5: the server could not find the requested resource (get pods dns-test-4240213d-8bee-4c36-ad80-f156063038b5)
Oct 10 15:46:59.490: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-809.svc.cluster.local from pod dns-809/dns-test-4240213d-8bee-4c36-ad80-f156063038b5: the server could not find the requested resource (get pods dns-test-4240213d-8bee-4c36-ad80-f156063038b5)
Oct 10 15:46:59.633: INFO: Unable to read jessie_udp@dns-test-service-2.dns-809.svc.cluster.local from pod dns-809/dns-test-4240213d-8bee-4c36-ad80-f156063038b5: the server could not find the requested resource (get pods dns-test-4240213d-8bee-4c36-ad80-f156063038b5)
Oct 10 15:46:59.777: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-809.svc.cluster.local from pod dns-809/dns-test-4240213d-8bee-4c36-ad80-f156063038b5: the server could not find the requested resource (get pods dns-test-4240213d-8bee-4c36-ad80-f156063038b5)
Oct 10 15:47:00.072: INFO: Lookups using dns-809/dns-test-4240213d-8bee-4c36-ad80-f156063038b5 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-809.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-809.svc.cluster.local wheezy_udp@dns-test-service-2.dns-809.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-809.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-809.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-809.svc.cluster.local jessie_udp@dns-test-service-2.dns-809.svc.cluster.local jessie_tcp@dns-test-service-2.dns-809.svc.cluster.local]

Oct 10 15:47:03.448: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-809.svc.cluster.local from pod dns-809/dns-test-4240213d-8bee-4c36-ad80-f156063038b5: the server could not find the requested resource (get pods dns-test-4240213d-8bee-4c36-ad80-f156063038b5)
Oct 10 15:47:03.591: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-809.svc.cluster.local from pod dns-809/dns-test-4240213d-8bee-4c36-ad80-f156063038b5: the server could not find the requested resource (get pods dns-test-4240213d-8bee-4c36-ad80-f156063038b5)
Oct 10 15:47:03.735: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-809.svc.cluster.local from pod dns-809/dns-test-4240213d-8bee-4c36-ad80-f156063038b5: the server could not find the requested resource (get pods dns-test-4240213d-8bee-4c36-ad80-f156063038b5)
Oct 10 15:47:03.882: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-809.svc.cluster.local from pod dns-809/dns-test-4240213d-8bee-4c36-ad80-f156063038b5: the server could not find the requested resource (get pods dns-test-4240213d-8bee-4c36-ad80-f156063038b5)
Oct 10 15:47:04.317: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-809.svc.cluster.local from pod dns-809/dns-test-4240213d-8bee-4c36-ad80-f156063038b5: the server could not find the requested resource (get pods dns-test-4240213d-8bee-4c36-ad80-f156063038b5)
Oct 10 15:47:04.461: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-809.svc.cluster.local from pod dns-809/dns-test-4240213d-8bee-4c36-ad80-f156063038b5: the server could not find the requested resource (get pods dns-test-4240213d-8bee-4c36-ad80-f156063038b5)
Oct 10 15:47:04.609: INFO: Unable to read jessie_udp@dns-test-service-2.dns-809.svc.cluster.local from pod dns-809/dns-test-4240213d-8bee-4c36-ad80-f156063038b5: the server could not find the requested resource (get pods dns-test-4240213d-8bee-4c36-ad80-f156063038b5)
Oct 10 15:47:04.753: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-809.svc.cluster.local from pod dns-809/dns-test-4240213d-8bee-4c36-ad80-f156063038b5: the server could not find the requested resource (get pods dns-test-4240213d-8bee-4c36-ad80-f156063038b5)
Oct 10 15:47:05.044: INFO: Lookups using dns-809/dns-test-4240213d-8bee-4c36-ad80-f156063038b5 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-809.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-809.svc.cluster.local wheezy_udp@dns-test-service-2.dns-809.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-809.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-809.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-809.svc.cluster.local jessie_udp@dns-test-service-2.dns-809.svc.cluster.local jessie_tcp@dns-test-service-2.dns-809.svc.cluster.local]

Oct 10 15:47:08.448: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-809.svc.cluster.local from pod dns-809/dns-test-4240213d-8bee-4c36-ad80-f156063038b5: the server could not find the requested resource (get pods dns-test-4240213d-8bee-4c36-ad80-f156063038b5)
Oct 10 15:47:08.592: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-809.svc.cluster.local from pod dns-809/dns-test-4240213d-8bee-4c36-ad80-f156063038b5: the server could not find the requested resource (get pods dns-test-4240213d-8bee-4c36-ad80-f156063038b5)
Oct 10 15:47:08.735: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-809.svc.cluster.local from pod dns-809/dns-test-4240213d-8bee-4c36-ad80-f156063038b5: the server could not find the requested resource (get pods dns-test-4240213d-8bee-4c36-ad80-f156063038b5)
Oct 10 15:47:08.879: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-809.svc.cluster.local from pod dns-809/dns-test-4240213d-8bee-4c36-ad80-f156063038b5: the server could not find the requested resource (get pods dns-test-4240213d-8bee-4c36-ad80-f156063038b5)
Oct 10 15:47:09.328: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-809.svc.cluster.local from pod dns-809/dns-test-4240213d-8bee-4c36-ad80-f156063038b5: the server could not find the requested resource (get pods dns-test-4240213d-8bee-4c36-ad80-f156063038b5)
Oct 10 15:47:09.473: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-809.svc.cluster.local from pod dns-809/dns-test-4240213d-8bee-4c36-ad80-f156063038b5: the server could not find the requested resource (get pods dns-test-4240213d-8bee-4c36-ad80-f156063038b5)
Oct 10 15:47:09.617: INFO: Unable to read jessie_udp@dns-test-service-2.dns-809.svc.cluster.local from pod dns-809/dns-test-4240213d-8bee-4c36-ad80-f156063038b5: the server could not find the requested resource (get pods dns-test-4240213d-8bee-4c36-ad80-f156063038b5)
Oct 10 15:47:09.760: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-809.svc.cluster.local from pod dns-809/dns-test-4240213d-8bee-4c36-ad80-f156063038b5: the server could not find the requested resource (get pods dns-test-4240213d-8bee-4c36-ad80-f156063038b5)
Oct 10 15:47:10.048: INFO: Lookups using dns-809/dns-test-4240213d-8bee-4c36-ad80-f156063038b5 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-809.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-809.svc.cluster.local wheezy_udp@dns-test-service-2.dns-809.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-809.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-809.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-809.svc.cluster.local jessie_udp@dns-test-service-2.dns-809.svc.cluster.local jessie_tcp@dns-test-service-2.dns-809.svc.cluster.local]

Oct 10 15:47:13.447: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-809.svc.cluster.local from pod dns-809/dns-test-4240213d-8bee-4c36-ad80-f156063038b5: the server could not find the requested resource (get pods dns-test-4240213d-8bee-4c36-ad80-f156063038b5)
Oct 10 15:47:13.593: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-809.svc.cluster.local from pod dns-809/dns-test-4240213d-8bee-4c36-ad80-f156063038b5: the server could not find the requested resource (get pods dns-test-4240213d-8bee-4c36-ad80-f156063038b5)
Oct 10 15:47:13.737: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-809.svc.cluster.local from pod dns-809/dns-test-4240213d-8bee-4c36-ad80-f156063038b5: the server could not find the requested resource (get pods dns-test-4240213d-8bee-4c36-ad80-f156063038b5)
Oct 10 15:47:13.883: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-809.svc.cluster.local from pod dns-809/dns-test-4240213d-8bee-4c36-ad80-f156063038b5: the server could not find the requested resource (get pods dns-test-4240213d-8bee-4c36-ad80-f156063038b5)
Oct 10 15:47:14.315: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-809.svc.cluster.local from pod dns-809/dns-test-4240213d-8bee-4c36-ad80-f156063038b5: the server could not find the requested resource (get pods dns-test-4240213d-8bee-4c36-ad80-f156063038b5)
Oct 10 15:47:14.459: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-809.svc.cluster.local from pod dns-809/dns-test-4240213d-8bee-4c36-ad80-f156063038b5: the server could not find the requested resource (get pods dns-test-4240213d-8bee-4c36-ad80-f156063038b5)
Oct 10 15:47:14.603: INFO: Unable to read jessie_udp@dns-test-service-2.dns-809.svc.cluster.local from pod dns-809/dns-test-4240213d-8bee-4c36-ad80-f156063038b5: the server could not find the requested resource (get pods dns-test-4240213d-8bee-4c36-ad80-f156063038b5)
Oct 10 15:47:14.747: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-809.svc.cluster.local from pod dns-809/dns-test-4240213d-8bee-4c36-ad80-f156063038b5: the server could not find the requested resource (get pods dns-test-4240213d-8bee-4c36-ad80-f156063038b5)
Oct 10 15:47:15.034: INFO: Lookups using dns-809/dns-test-4240213d-8bee-4c36-ad80-f156063038b5 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-809.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-809.svc.cluster.local wheezy_udp@dns-test-service-2.dns-809.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-809.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-809.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-809.svc.cluster.local jessie_udp@dns-test-service-2.dns-809.svc.cluster.local jessie_tcp@dns-test-service-2.dns-809.svc.cluster.local]

Oct 10 15:47:18.457: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-809.svc.cluster.local from pod dns-809/dns-test-4240213d-8bee-4c36-ad80-f156063038b5: the server could not find the requested resource (get pods dns-test-4240213d-8bee-4c36-ad80-f156063038b5)
Oct 10 15:47:18.602: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-809.svc.cluster.local from pod dns-809/dns-test-4240213d-8bee-4c36-ad80-f156063038b5: the server could not find the requested resource (get pods dns-test-4240213d-8bee-4c36-ad80-f156063038b5)
Oct 10 15:47:18.746: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-809.svc.cluster.local from pod dns-809/dns-test-4240213d-8bee-4c36-ad80-f156063038b5: the server could not find the requested resource (get pods dns-test-4240213d-8bee-4c36-ad80-f156063038b5)
Oct 10 15:47:18.890: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-809.svc.cluster.local from pod dns-809/dns-test-4240213d-8bee-4c36-ad80-f156063038b5: the server could not find the requested resource (get pods dns-test-4240213d-8bee-4c36-ad80-f156063038b5)
Oct 10 15:47:19.326: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-809.svc.cluster.local from pod dns-809/dns-test-4240213d-8bee-4c36-ad80-f156063038b5: the server could not find the requested resource (get pods dns-test-4240213d-8bee-4c36-ad80-f156063038b5)
Oct 10 15:47:19.472: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-809.svc.cluster.local from pod dns-809/dns-test-4240213d-8bee-4c36-ad80-f156063038b5: the server could not find the requested resource (get pods dns-test-4240213d-8bee-4c36-ad80-f156063038b5)
Oct 10 15:47:19.616: INFO: Unable to read jessie_udp@dns-test-service-2.dns-809.svc.cluster.local from pod dns-809/dns-test-4240213d-8bee-4c36-ad80-f156063038b5: the server could not find the requested resource (get pods dns-test-4240213d-8bee-4c36-ad80-f156063038b5)
Oct 10 15:47:19.760: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-809.svc.cluster.local from pod dns-809/dns-test-4240213d-8bee-4c36-ad80-f156063038b5: the server could not find the requested resource (get pods dns-test-4240213d-8bee-4c36-ad80-f156063038b5)
Oct 10 15:47:20.050: INFO: Lookups using dns-809/dns-test-4240213d-8bee-4c36-ad80-f156063038b5 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-809.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-809.svc.cluster.local wheezy_udp@dns-test-service-2.dns-809.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-809.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-809.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-809.svc.cluster.local jessie_udp@dns-test-service-2.dns-809.svc.cluster.local jessie_tcp@dns-test-service-2.dns-809.svc.cluster.local]

Oct 10 15:47:25.052: INFO: DNS probes using dns-809/dns-test-4240213d-8bee-4c36-ad80-f156063038b5 succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
... skipping 5 lines ...
• [SLOW TEST:39.623 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide DNS for pods for Subdomain [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":-1,"completed":4,"skipped":41,"failed":0}

SS
------------------------------
[BeforeEach] [sig-node] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 19 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when scheduling a busybox command in a pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:41
    should print the output to logs [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":4,"skipped":31,"failed":0}
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 10 15:46:58.861: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 67 lines ...
• [SLOW TEST:28.356 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":5,"skipped":31,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
... skipping 65 lines ...
STEP: Destroying namespace "services-4966" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753

•
------------------------------
{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":-1,"completed":6,"skipped":35,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
... skipping 35 lines ...
STEP: Deleting pod hostexec-ip-172-20-33-168.sa-east-1.compute.internal-sw589 in namespace volumemode-73
Oct 10 15:47:18.455: INFO: Deleting pod "pod-c2c93d4b-39ea-4abb-b47b-47e944397f72" in namespace "volumemode-73"
Oct 10 15:47:18.599: INFO: Wait up to 5m0s for pod "pod-c2c93d4b-39ea-4abb-b47b-47e944397f72" to be fully deleted
STEP: Deleting pv and pvc
Oct 10 15:47:22.887: INFO: Deleting PersistentVolumeClaim "pvc-gfwqd"
Oct 10 15:47:23.032: INFO: Deleting PersistentVolume "aws-dp64n"
Oct 10 15:47:23.428: INFO: Couldn't delete PD "aws://sa-east-1a/vol-0c8e987fb8c1aa672", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0c8e987fb8c1aa672 is currently attached to i-01abff52cbaf0c001
	status code: 400, request id: c1fc9f88-0ef1-415c-99e7-8987888d0a5f
Oct 10 15:47:29.191: INFO: Successfully deleted PD "aws://sa-east-1a/vol-0c8e987fb8c1aa672".
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 10 15:47:29.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volumemode-73" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":2,"skipped":14,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
... skipping 82 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:379
    should return command exit codes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:499
      running a successful command
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:512
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should return command exit codes running a successful command","total":-1,"completed":6,"skipped":74,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:47:30.989: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 78 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] volumes should store data","total":-1,"completed":4,"skipped":21,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:47:31.530: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 34 lines ...
      Driver emptydir doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":74,"failed":0}
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 10 15:47:26.375: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0777 on node default medium
Oct 10 15:47:27.236: INFO: Waiting up to 5m0s for pod "pod-1e659f28-9497-4982-ab08-ac59227b4044" in namespace "emptydir-3947" to be "Succeeded or Failed"
Oct 10 15:47:27.379: INFO: Pod "pod-1e659f28-9497-4982-ab08-ac59227b4044": Phase="Pending", Reason="", readiness=false. Elapsed: 143.192363ms
Oct 10 15:47:29.523: INFO: Pod "pod-1e659f28-9497-4982-ab08-ac59227b4044": Phase="Pending", Reason="", readiness=false. Elapsed: 2.28703983s
Oct 10 15:47:31.666: INFO: Pod "pod-1e659f28-9497-4982-ab08-ac59227b4044": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.430576356s
STEP: Saw pod success
Oct 10 15:47:31.667: INFO: Pod "pod-1e659f28-9497-4982-ab08-ac59227b4044" satisfied condition "Succeeded or Failed"
Oct 10 15:47:31.810: INFO: Trying to get logs from node ip-172-20-42-51.sa-east-1.compute.internal pod pod-1e659f28-9497-4982-ab08-ac59227b4044 container test-container: <nil>
STEP: delete the pod
Oct 10 15:47:32.127: INFO: Waiting for pod pod-1e659f28-9497-4982-ab08-ac59227b4044 to disappear
Oct 10 15:47:32.270: INFO: Pod pod-1e659f28-9497-4982-ab08-ac59227b4044 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.189 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":74,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:47:32.573: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 54 lines ...
• [SLOW TEST:9.071 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  test Deployment ReplicaSet orphaning and adoption regarding controllerRef
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:136
------------------------------
{"msg":"PASSED [sig-apps] Deployment test Deployment ReplicaSet orphaning and adoption regarding controllerRef","total":-1,"completed":3,"skipped":21,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:47:34.192: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 65 lines ...
• [SLOW TEST:10.196 seconds]
[sig-apps] ReplicaSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should list and delete a collection of ReplicaSets [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should list and delete a collection of ReplicaSets [Conformance]","total":-1,"completed":6,"skipped":72,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-cli] Kubectl Port forwarding
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 25 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  With a server listening on localhost
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:474
    should support forwarding over websockets
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:490
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on localhost should support forwarding over websockets","total":-1,"completed":5,"skipped":21,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:47:34.333: INFO: Only supported for providers [azure] (not aws)
... skipping 84 lines ...
STEP: Deleting pod aws-client in namespace volume-7452
Oct 10 15:47:15.176: INFO: Waiting for pod aws-client to disappear
Oct 10 15:47:15.319: INFO: Pod aws-client still exists
Oct 10 15:47:17.320: INFO: Waiting for pod aws-client to disappear
Oct 10 15:47:17.464: INFO: Pod aws-client no longer exists
STEP: cleaning the environment after aws
Oct 10 15:47:17.740: INFO: Couldn't delete PD "aws://sa-east-1a/vol-029e18241351bdb94", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-029e18241351bdb94 is currently attached to i-081885fd3bdb73a5b
	status code: 400, request id: 98115573-e378-42f6-9d77-66f609524eb3
Oct 10 15:47:23.447: INFO: Couldn't delete PD "aws://sa-east-1a/vol-029e18241351bdb94", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-029e18241351bdb94 is currently attached to i-081885fd3bdb73a5b
	status code: 400, request id: ff9b49f4-9749-4309-b513-9305e2966ef6
Oct 10 15:47:29.263: INFO: Couldn't delete PD "aws://sa-east-1a/vol-029e18241351bdb94", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-029e18241351bdb94 is currently attached to i-081885fd3bdb73a5b
	status code: 400, request id: 672d226e-641c-4099-bb02-81b2877d09fb
Oct 10 15:47:35.059: INFO: Successfully deleted PD "aws://sa-east-1a/vol-029e18241351bdb94".
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 10 15:47:35.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-7452" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] volumes should store data","total":-1,"completed":4,"skipped":24,"failed":0}

S
------------------------------
[BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 31 lines ...
• [SLOW TEST:5.198 seconds]
[sig-auth] Certificates API [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should support CSR API operations [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":-1,"completed":7,"skipped":40,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:47:35.709: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 69 lines ...
[AfterEach] [sig-api-machinery] client-go should negotiate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 10 15:47:36.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

•
------------------------------
{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/vnd.kubernetes.protobuf\"","total":-1,"completed":8,"skipped":52,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:47:36.196: INFO: Only supported for providers [vsphere] (not aws)
... skipping 27 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 10 15:47:36.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "request-timeout-4882" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Server request timeout should return HTTP status code 400 if the user specifies an invalid timeout in the request URL","total":-1,"completed":5,"skipped":25,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:47:36.544: INFO: Only supported for providers [openstack] (not aws)
... skipping 38 lines ...
• [SLOW TEST:10.522 seconds]
[sig-node] InitContainer [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":-1,"completed":6,"skipped":30,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:47:44.900: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 60 lines ...
• [SLOW TEST:12.460 seconds]
[sig-apps] DisruptionController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  evictions: enough pods, absolute => should allow an eviction
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:286
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController evictions: enough pods, absolute =\u003e should allow an eviction","total":-1,"completed":10,"skipped":77,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:47:45.082: INFO: Only supported for providers [vsphere] (not aws)
... skipping 121 lines ...
Oct 10 15:47:04.951: INFO: >>> kubeConfig: /root/.kube/config
Oct 10 15:47:05.941: INFO: Exec stderr: ""
Oct 10 15:47:08.376: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir "/var/lib/kubelet/mount-propagation-4303"/host; mount -t tmpfs e2e-mount-propagation-host "/var/lib/kubelet/mount-propagation-4303"/host; echo host > "/var/lib/kubelet/mount-propagation-4303"/host/file] Namespace:mount-propagation-4303 PodName:hostexec-ip-172-20-61-156.sa-east-1.compute.internal-nrrmh ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Oct 10 15:47:08.376: INFO: >>> kubeConfig: /root/.kube/config
Oct 10 15:47:09.509: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-4303 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 10 15:47:09.509: INFO: >>> kubeConfig: /root/.kube/config
Oct 10 15:47:10.597: INFO: pod master mount master: stdout: "master", stderr: "" error: <nil>
Oct 10 15:47:10.740: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-4303 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 10 15:47:10.740: INFO: >>> kubeConfig: /root/.kube/config
Oct 10 15:47:11.936: INFO: pod master mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1
Oct 10 15:47:12.079: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-4303 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 10 15:47:12.079: INFO: >>> kubeConfig: /root/.kube/config
Oct 10 15:47:13.054: INFO: pod master mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1
Oct 10 15:47:13.197: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-4303 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 10 15:47:13.197: INFO: >>> kubeConfig: /root/.kube/config
Oct 10 15:47:14.147: INFO: pod master mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1
Oct 10 15:47:14.290: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-4303 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 10 15:47:14.290: INFO: >>> kubeConfig: /root/.kube/config
Oct 10 15:47:15.252: INFO: pod master mount host: stdout: "host", stderr: "" error: <nil>
Oct 10 15:47:15.395: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-4303 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 10 15:47:15.395: INFO: >>> kubeConfig: /root/.kube/config
Oct 10 15:47:16.375: INFO: pod slave mount master: stdout: "master", stderr: "" error: <nil>
Oct 10 15:47:16.518: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-4303 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 10 15:47:16.519: INFO: >>> kubeConfig: /root/.kube/config
Oct 10 15:47:17.474: INFO: pod slave mount slave: stdout: "slave", stderr: "" error: <nil>
Oct 10 15:47:17.617: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-4303 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 10 15:47:17.617: INFO: >>> kubeConfig: /root/.kube/config
Oct 10 15:47:18.578: INFO: pod slave mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1
Oct 10 15:47:18.723: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-4303 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 10 15:47:18.723: INFO: >>> kubeConfig: /root/.kube/config
Oct 10 15:47:19.696: INFO: pod slave mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1
Oct 10 15:47:19.839: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-4303 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 10 15:47:19.839: INFO: >>> kubeConfig: /root/.kube/config
Oct 10 15:47:20.776: INFO: pod slave mount host: stdout: "host", stderr: "" error: <nil>
Oct 10 15:47:20.919: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-4303 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 10 15:47:20.919: INFO: >>> kubeConfig: /root/.kube/config
Oct 10 15:47:21.935: INFO: pod private mount master: stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1
Oct 10 15:47:22.079: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-4303 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 10 15:47:22.079: INFO: >>> kubeConfig: /root/.kube/config
Oct 10 15:47:23.022: INFO: pod private mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1
Oct 10 15:47:23.165: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-4303 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 10 15:47:23.166: INFO: >>> kubeConfig: /root/.kube/config
Oct 10 15:47:24.109: INFO: pod private mount private: stdout: "private", stderr: "" error: <nil>
Oct 10 15:47:24.252: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-4303 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 10 15:47:24.252: INFO: >>> kubeConfig: /root/.kube/config
Oct 10 15:47:25.219: INFO: pod private mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1
Oct 10 15:47:25.363: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-4303 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 10 15:47:25.363: INFO: >>> kubeConfig: /root/.kube/config
Oct 10 15:47:26.562: INFO: pod private mount host: stdout: "", stderr: "cat: can't open '/mnt/test/host/file': No such file or directory" error: command terminated with exit code 1
Oct 10 15:47:26.715: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-4303 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 10 15:47:26.715: INFO: >>> kubeConfig: /root/.kube/config
Oct 10 15:47:27.799: INFO: pod default mount master: stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1
Oct 10 15:47:27.942: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-4303 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 10 15:47:27.942: INFO: >>> kubeConfig: /root/.kube/config
Oct 10 15:47:28.980: INFO: pod default mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1
Oct 10 15:47:29.123: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-4303 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 10 15:47:29.123: INFO: >>> kubeConfig: /root/.kube/config
Oct 10 15:47:30.157: INFO: pod default mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1
Oct 10 15:47:30.312: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-4303 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 10 15:47:30.312: INFO: >>> kubeConfig: /root/.kube/config
Oct 10 15:47:31.329: INFO: pod default mount default: stdout: "default", stderr: "" error: <nil>
Oct 10 15:47:31.472: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-4303 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 10 15:47:31.472: INFO: >>> kubeConfig: /root/.kube/config
Oct 10 15:47:32.498: INFO: pod default mount host: stdout: "", stderr: "cat: can't open '/mnt/test/host/file': No such file or directory" error: command terminated with exit code 1
Oct 10 15:47:32.498: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c pidof kubelet] Namespace:mount-propagation-4303 PodName:hostexec-ip-172-20-61-156.sa-east-1.compute.internal-nrrmh ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Oct 10 15:47:32.498: INFO: >>> kubeConfig: /root/.kube/config
Oct 10 15:47:33.465: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c nsenter -t 4449 -m cat "/var/lib/kubelet/mount-propagation-4303/host/file"] Namespace:mount-propagation-4303 PodName:hostexec-ip-172-20-61-156.sa-east-1.compute.internal-nrrmh ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Oct 10 15:47:33.465: INFO: >>> kubeConfig: /root/.kube/config
Oct 10 15:47:34.427: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c nsenter -t 4449 -m cat "/var/lib/kubelet/mount-propagation-4303/master/file"] Namespace:mount-propagation-4303 PodName:hostexec-ip-172-20-61-156.sa-east-1.compute.internal-nrrmh ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Oct 10 15:47:34.428: INFO: >>> kubeConfig: /root/.kube/config
... skipping 29 lines ...
• [SLOW TEST:86.660 seconds]
[sig-node] Mount propagation
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should propagate mounts within defined scopes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/mount_propagation.go:83
------------------------------
{"msg":"PASSED [sig-node] Mount propagation should propagate mounts within defined scopes","total":-1,"completed":3,"skipped":28,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 21 lines ...
• [SLOW TEST:14.190 seconds]
[sig-apps] DisruptionController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  evictions: enough pods, replicaSet, percentage => should allow an eviction
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:286
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController evictions: enough pods, replicaSet, percentage =\u003e should allow an eviction","total":-1,"completed":4,"skipped":37,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:47:48.486: INFO: Only supported for providers [vsphere] (not aws)
... skipping 27 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating pod pod-subpath-test-configmap-xpfr
STEP: Creating a pod to test atomic-volume-subpath
Oct 10 15:47:17.686: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-xpfr" in namespace "subpath-6277" to be "Succeeded or Failed"
Oct 10 15:47:17.829: INFO: Pod "pod-subpath-test-configmap-xpfr": Phase="Pending", Reason="", readiness=false. Elapsed: 142.943734ms
Oct 10 15:47:19.977: INFO: Pod "pod-subpath-test-configmap-xpfr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.290652655s
Oct 10 15:47:22.121: INFO: Pod "pod-subpath-test-configmap-xpfr": Phase="Pending", Reason="", readiness=false. Elapsed: 4.435076677s
Oct 10 15:47:24.266: INFO: Pod "pod-subpath-test-configmap-xpfr": Phase="Pending", Reason="", readiness=false. Elapsed: 6.579430115s
Oct 10 15:47:26.409: INFO: Pod "pod-subpath-test-configmap-xpfr": Phase="Running", Reason="", readiness=true. Elapsed: 8.723313873s
Oct 10 15:47:28.555: INFO: Pod "pod-subpath-test-configmap-xpfr": Phase="Running", Reason="", readiness=true. Elapsed: 10.86837229s
... skipping 4 lines ...
Oct 10 15:47:39.296: INFO: Pod "pod-subpath-test-configmap-xpfr": Phase="Running", Reason="", readiness=true. Elapsed: 21.609744482s
Oct 10 15:47:41.440: INFO: Pod "pod-subpath-test-configmap-xpfr": Phase="Running", Reason="", readiness=true. Elapsed: 23.753932795s
Oct 10 15:47:43.585: INFO: Pod "pod-subpath-test-configmap-xpfr": Phase="Running", Reason="", readiness=true. Elapsed: 25.898715139s
Oct 10 15:47:45.728: INFO: Pod "pod-subpath-test-configmap-xpfr": Phase="Running", Reason="", readiness=true. Elapsed: 28.04215611s
Oct 10 15:47:47.872: INFO: Pod "pod-subpath-test-configmap-xpfr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.185692657s
STEP: Saw pod success
Oct 10 15:47:47.872: INFO: Pod "pod-subpath-test-configmap-xpfr" satisfied condition "Succeeded or Failed"
Oct 10 15:47:48.015: INFO: Trying to get logs from node ip-172-20-42-51.sa-east-1.compute.internal pod pod-subpath-test-configmap-xpfr container test-container-subpath-configmap-xpfr: <nil>
STEP: delete the pod
Oct 10 15:47:48.327: INFO: Waiting for pod pod-subpath-test-configmap-xpfr to disappear
Oct 10 15:47:48.469: INFO: Pod pod-subpath-test-configmap-xpfr no longer exists
STEP: Deleting pod pod-subpath-test-configmap-xpfr
Oct 10 15:47:48.470: INFO: Deleting pod "pod-subpath-test-configmap-xpfr" in namespace "subpath-6277"
... skipping 8 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":-1,"completed":10,"skipped":85,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:47:48.948: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 78 lines ...
• [SLOW TEST:22.319 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":-1,"completed":3,"skipped":16,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
... skipping 82 lines ...
Oct 10 15:47:00.402: INFO: PersistentVolumeClaim csi-hostpath4vctf found but phase is Pending instead of Bound.
Oct 10 15:47:02.547: INFO: PersistentVolumeClaim csi-hostpath4vctf found but phase is Pending instead of Bound.
Oct 10 15:47:04.691: INFO: PersistentVolumeClaim csi-hostpath4vctf found but phase is Pending instead of Bound.
Oct 10 15:47:06.835: INFO: PersistentVolumeClaim csi-hostpath4vctf found and phase=Bound (53.765617626s)
STEP: Creating pod pod-subpath-test-dynamicpv-kgnc
STEP: Creating a pod to test atomic-volume-subpath
Oct 10 15:47:07.269: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-kgnc" in namespace "provisioning-5956" to be "Succeeded or Failed"
Oct 10 15:47:07.412: INFO: Pod "pod-subpath-test-dynamicpv-kgnc": Phase="Pending", Reason="", readiness=false. Elapsed: 143.612236ms
Oct 10 15:47:09.556: INFO: Pod "pod-subpath-test-dynamicpv-kgnc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.28723863s
Oct 10 15:47:11.700: INFO: Pod "pod-subpath-test-dynamicpv-kgnc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.431277314s
Oct 10 15:47:13.845: INFO: Pod "pod-subpath-test-dynamicpv-kgnc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.576508884s
Oct 10 15:47:15.990: INFO: Pod "pod-subpath-test-dynamicpv-kgnc": Phase="Running", Reason="", readiness=true. Elapsed: 8.720915232s
Oct 10 15:47:18.134: INFO: Pod "pod-subpath-test-dynamicpv-kgnc": Phase="Running", Reason="", readiness=true. Elapsed: 10.865158171s
... skipping 3 lines ...
Oct 10 15:47:26.717: INFO: Pod "pod-subpath-test-dynamicpv-kgnc": Phase="Running", Reason="", readiness=true. Elapsed: 19.448087721s
Oct 10 15:47:28.861: INFO: Pod "pod-subpath-test-dynamicpv-kgnc": Phase="Running", Reason="", readiness=true. Elapsed: 21.592146158s
Oct 10 15:47:31.004: INFO: Pod "pod-subpath-test-dynamicpv-kgnc": Phase="Running", Reason="", readiness=true. Elapsed: 23.73569485s
Oct 10 15:47:33.149: INFO: Pod "pod-subpath-test-dynamicpv-kgnc": Phase="Running", Reason="", readiness=true. Elapsed: 25.879936648s
Oct 10 15:47:35.293: INFO: Pod "pod-subpath-test-dynamicpv-kgnc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.023971532s
STEP: Saw pod success
Oct 10 15:47:35.293: INFO: Pod "pod-subpath-test-dynamicpv-kgnc" satisfied condition "Succeeded or Failed"
Oct 10 15:47:35.436: INFO: Trying to get logs from node ip-172-20-61-156.sa-east-1.compute.internal pod pod-subpath-test-dynamicpv-kgnc container test-container-subpath-dynamicpv-kgnc: <nil>
STEP: delete the pod
Oct 10 15:47:35.736: INFO: Waiting for pod pod-subpath-test-dynamicpv-kgnc to disappear
Oct 10 15:47:35.879: INFO: Pod pod-subpath-test-dynamicpv-kgnc no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-kgnc
Oct 10 15:47:35.879: INFO: Deleting pod "pod-subpath-test-dynamicpv-kgnc" in namespace "provisioning-5956"
... skipping 60 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":2,"skipped":3,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:47:55.054: INFO: Only supported for providers [gce gke] (not aws)
... skipping 65 lines ...
Oct 10 15:47:44.608: INFO: PersistentVolumeClaim pvc-9wstn found but phase is Pending instead of Bound.
Oct 10 15:47:46.752: INFO: PersistentVolumeClaim pvc-9wstn found and phase=Bound (4.432514369s)
Oct 10 15:47:46.752: INFO: Waiting up to 3m0s for PersistentVolume local-99zmq to have phase Bound
Oct 10 15:47:46.895: INFO: PersistentVolume local-99zmq found and phase=Bound (143.132033ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-6pz2
STEP: Creating a pod to test exec-volume-test
Oct 10 15:47:47.327: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-6pz2" in namespace "volume-2054" to be "Succeeded or Failed"
Oct 10 15:47:47.470: INFO: Pod "exec-volume-test-preprovisionedpv-6pz2": Phase="Pending", Reason="", readiness=false. Elapsed: 143.089998ms
Oct 10 15:47:49.614: INFO: Pod "exec-volume-test-preprovisionedpv-6pz2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.287323638s
Oct 10 15:47:51.759: INFO: Pod "exec-volume-test-preprovisionedpv-6pz2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.431700358s
STEP: Saw pod success
Oct 10 15:47:51.759: INFO: Pod "exec-volume-test-preprovisionedpv-6pz2" satisfied condition "Succeeded or Failed"
Oct 10 15:47:51.907: INFO: Trying to get logs from node ip-172-20-33-168.sa-east-1.compute.internal pod exec-volume-test-preprovisionedpv-6pz2 container exec-container-preprovisionedpv-6pz2: <nil>
STEP: delete the pod
Oct 10 15:47:52.215: INFO: Waiting for pod exec-volume-test-preprovisionedpv-6pz2 to disappear
Oct 10 15:47:52.359: INFO: Pod exec-volume-test-preprovisionedpv-6pz2 no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-6pz2
Oct 10 15:47:52.359: INFO: Deleting pod "exec-volume-test-preprovisionedpv-6pz2" in namespace "volume-2054"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (ext4)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume","total":-1,"completed":7,"skipped":75,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 33 lines ...
• [SLOW TEST:23.741 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to create a functioning NodePort service [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":-1,"completed":9,"skipped":54,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:47:59.970: INFO: Driver local doesn't support ext4 -- skipping
... skipping 84 lines ...
• [SLOW TEST:107.588 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should schedule multiple jobs concurrently [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","total":-1,"completed":3,"skipped":22,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:48:00.980: INFO: Only supported for providers [gce gke] (not aws)
... skipping 104 lines ...
Oct 10 15:48:01.042: INFO: pv is nil


S [SKIPPING] in Spec Setup (BeforeEach) [1.022 seconds]
[sig-storage] PersistentVolumes GCEPD
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should test that deleting a PVC before the pod does not cause pod deletion to fail on PD detach [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:127

  Only supported for providers [gce gke] (not aws)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:85
------------------------------
... skipping 103 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:379
    should contain last line of the log
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:615
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should contain last line of the log","total":-1,"completed":7,"skipped":75,"failed":0}
[BeforeEach] [sig-api-machinery] API priority and fairness
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 10 15:48:01.701: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename apf
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 73 lines ...
• [SLOW TEST:15.196 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":-1,"completed":5,"skipped":50,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:48:03.718: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 135 lines ...
• [SLOW TEST:19.128 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":-1,"completed":7,"skipped":37,"failed":0}

SSSSSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":1,"skipped":5,"failed":0}
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 10 15:45:50.833: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 7 lines ...
Oct 10 15:45:56.006: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true)
Oct 10 15:45:58.006: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true)
Oct 10 15:45:58.150: INFO: Running '/tmp/kubectl1123422819/kubectl --server=https://api.e2e-2cea5de97a-6a582.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-8625 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode'
Oct 10 15:45:59.629: INFO: rc: 7
Oct 10 15:45:59.777: INFO: Waiting for pod kube-proxy-mode-detector to disappear
Oct 10 15:45:59.923: INFO: Pod kube-proxy-mode-detector no longer exists
Oct 10 15:45:59.923: INFO: Couldn't detect KubeProxy mode - test failure may be expected: error running /tmp/kubectl1123422819/kubectl --server=https://api.e2e-2cea5de97a-6a582.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-8625 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode:
Command stdout:

stderr:
+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode
command terminated with exit code 7

error:
exit status 7
STEP: creating service affinity-nodeport-timeout in namespace services-8625
STEP: creating replication controller affinity-nodeport-timeout in namespace services-8625
I1010 15:46:00.217080    5389 runners.go:190] Created replication controller with name: affinity-nodeport-timeout, namespace: services-8625, replica count: 3
I1010 15:46:03.368721    5389 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1010 15:46:06.369759    5389 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
... skipping 62 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":2,"skipped":5,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:48:04.111: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
... skipping 96 lines ...
• [SLOW TEST:76.359 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":35,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:48:05.193: INFO: Only supported for providers [gce gke] (not aws)
... skipping 65 lines ...
Oct 10 15:47:15.096: INFO: PersistentVolumeClaim pvc-5p9cm found and phase=Bound (142.710707ms)
Oct 10 15:47:15.096: INFO: Waiting up to 3m0s for PersistentVolume nfs-cbjjr to have phase Bound
Oct 10 15:47:15.238: INFO: PersistentVolume nfs-cbjjr found and phase=Bound (142.558734ms)
[It] should test that a PV becomes Available and is clean after the PVC is deleted.
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:283
STEP: Writing to the volume.
Oct 10 15:47:15.667: INFO: Waiting up to 5m0s for pod "pvc-tester-kn8mp" in namespace "pv-7846" to be "Succeeded or Failed"
Oct 10 15:47:15.810: INFO: Pod "pvc-tester-kn8mp": Phase="Pending", Reason="", readiness=false. Elapsed: 142.601956ms
Oct 10 15:47:17.955: INFO: Pod "pvc-tester-kn8mp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.287240317s
Oct 10 15:47:20.099: INFO: Pod "pvc-tester-kn8mp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.43140176s
Oct 10 15:47:22.245: INFO: Pod "pvc-tester-kn8mp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.577963534s
STEP: Saw pod success
Oct 10 15:47:22.245: INFO: Pod "pvc-tester-kn8mp" satisfied condition "Succeeded or Failed"
STEP: Deleting the claim
Oct 10 15:47:22.246: INFO: Deleting pod "pvc-tester-kn8mp" in namespace "pv-7846"
Oct 10 15:47:22.392: INFO: Wait up to 5m0s for pod "pvc-tester-kn8mp" to be fully deleted
Oct 10 15:47:22.538: INFO: Deleting PVC pvc-5p9cm to trigger reclamation of PV 
Oct 10 15:47:22.538: INFO: Deleting PersistentVolumeClaim "pvc-5p9cm"
Oct 10 15:47:22.684: INFO: Waiting for reclaim process to complete.
... skipping 6 lines ...
Oct 10 15:47:33.551: INFO: PersistentVolume nfs-cbjjr found and phase=Available (10.867302895s)
Oct 10 15:47:33.694: INFO: PV nfs-cbjjr now in "Available" phase
STEP: Re-mounting the volume.
Oct 10 15:47:33.840: INFO: Waiting up to timeout=1m0s for PersistentVolumeClaims [pvc-kfs4n] to have phase Bound
Oct 10 15:47:33.983: INFO: PersistentVolumeClaim pvc-kfs4n found and phase=Bound (142.575519ms)
STEP: Verifying the mount has been cleaned.
Oct 10 15:47:34.127: INFO: Waiting up to 5m0s for pod "pvc-tester-27bvt" in namespace "pv-7846" to be "Succeeded or Failed"
Oct 10 15:47:34.269: INFO: Pod "pvc-tester-27bvt": Phase="Pending", Reason="", readiness=false. Elapsed: 142.670517ms
Oct 10 15:47:36.414: INFO: Pod "pvc-tester-27bvt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.286915836s
Oct 10 15:47:38.566: INFO: Pod "pvc-tester-27bvt": Phase="Pending", Reason="", readiness=false. Elapsed: 4.4396889s
Oct 10 15:47:40.713: INFO: Pod "pvc-tester-27bvt": Phase="Pending", Reason="", readiness=false. Elapsed: 6.586410685s
Oct 10 15:47:42.862: INFO: Pod "pvc-tester-27bvt": Phase="Pending", Reason="", readiness=false. Elapsed: 8.735463908s
Oct 10 15:47:45.006: INFO: Pod "pvc-tester-27bvt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.879032607s
STEP: Saw pod success
Oct 10 15:47:45.006: INFO: Pod "pvc-tester-27bvt" satisfied condition "Succeeded or Failed"
Oct 10 15:47:45.006: INFO: Deleting pod "pvc-tester-27bvt" in namespace "pv-7846"
Oct 10 15:47:45.155: INFO: Wait up to 5m0s for pod "pvc-tester-27bvt" to be fully deleted
Oct 10 15:47:45.297: INFO: Pod exited without failure; the volume has been recycled.
Oct 10 15:47:45.297: INFO: Removing second PVC, waiting for the recycler to finish before cleanup.
Oct 10 15:47:45.297: INFO: Deleting PVC pvc-kfs4n to trigger reclamation of PV 
Oct 10 15:47:45.297: INFO: Deleting PersistentVolumeClaim "pvc-kfs4n"
... skipping 27 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:122
    when invoking the Recycle reclaim policy
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:265
      should test that a PV becomes Available and is clean after the PVC is deleted.
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:283
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes NFS when invoking the Recycle reclaim policy should test that a PV becomes Available and is clean after the PVC is deleted.","total":-1,"completed":5,"skipped":61,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 21 lines ...
Oct 10 15:47:45.248: INFO: PersistentVolumeClaim pvc-vvk2z found but phase is Pending instead of Bound.
Oct 10 15:47:47.392: INFO: PersistentVolumeClaim pvc-vvk2z found and phase=Bound (15.159004781s)
Oct 10 15:47:47.392: INFO: Waiting up to 3m0s for PersistentVolume local-bm9xg to have phase Bound
Oct 10 15:47:47.537: INFO: PersistentVolume local-bm9xg found and phase=Bound (144.738398ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-xwxv
STEP: Creating a pod to test subpath
Oct 10 15:47:47.971: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-xwxv" in namespace "provisioning-9475" to be "Succeeded or Failed"
Oct 10 15:47:48.115: INFO: Pod "pod-subpath-test-preprovisionedpv-xwxv": Phase="Pending", Reason="", readiness=false. Elapsed: 143.694755ms
Oct 10 15:47:50.264: INFO: Pod "pod-subpath-test-preprovisionedpv-xwxv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.292609696s
Oct 10 15:47:52.410: INFO: Pod "pod-subpath-test-preprovisionedpv-xwxv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.438328106s
Oct 10 15:47:54.554: INFO: Pod "pod-subpath-test-preprovisionedpv-xwxv": Phase="Pending", Reason="", readiness=false. Elapsed: 6.582443313s
Oct 10 15:47:56.699: INFO: Pod "pod-subpath-test-preprovisionedpv-xwxv": Phase="Pending", Reason="", readiness=false. Elapsed: 8.727450645s
Oct 10 15:47:58.843: INFO: Pod "pod-subpath-test-preprovisionedpv-xwxv": Phase="Pending", Reason="", readiness=false. Elapsed: 10.871309729s
Oct 10 15:48:00.987: INFO: Pod "pod-subpath-test-preprovisionedpv-xwxv": Phase="Pending", Reason="", readiness=false. Elapsed: 13.01565334s
Oct 10 15:48:03.134: INFO: Pod "pod-subpath-test-preprovisionedpv-xwxv": Phase="Pending", Reason="", readiness=false. Elapsed: 15.16264644s
Oct 10 15:48:05.279: INFO: Pod "pod-subpath-test-preprovisionedpv-xwxv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.307572811s
STEP: Saw pod success
Oct 10 15:48:05.279: INFO: Pod "pod-subpath-test-preprovisionedpv-xwxv" satisfied condition "Succeeded or Failed"
Oct 10 15:48:05.423: INFO: Trying to get logs from node ip-172-20-61-156.sa-east-1.compute.internal pod pod-subpath-test-preprovisionedpv-xwxv container test-container-volume-preprovisionedpv-xwxv: <nil>
STEP: delete the pod
Oct 10 15:48:05.746: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-xwxv to disappear
Oct 10 15:48:05.889: INFO: Pod pod-subpath-test-preprovisionedpv-xwxv no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-xwxv
Oct 10 15:48:05.889: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-xwxv" in namespace "provisioning-9475"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":5,"skipped":43,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:48:07.973: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 185 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: cinder]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [openstack] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1092
------------------------------
... skipping 32 lines ...
[AfterEach] [sig-api-machinery] client-go should negotiate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 10 15:48:08.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

•
------------------------------
{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/vnd.kubernetes.protobuf,application/json\"","total":-1,"completed":6,"skipped":88,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:48:08.771: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 159 lines ...
Oct 10 15:47:27.656: INFO: PersistentVolumeClaim pvc-c4dsn found but phase is Pending instead of Bound.
Oct 10 15:47:29.801: INFO: PersistentVolumeClaim pvc-c4dsn found and phase=Bound (2.288170357s)
STEP: Deleting the previously created pod
Oct 10 15:47:42.522: INFO: Deleting pod "pvc-volume-tester-9nc9m" in namespace "csi-mock-volumes-6424"
Oct 10 15:47:42.666: INFO: Wait up to 5m0s for pod "pvc-volume-tester-9nc9m" to be fully deleted
STEP: Checking CSI driver logs
Oct 10 15:47:45.104: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/28cb666d-dc7f-4d2e-975f-c90de1681314/volumes/kubernetes.io~csi/pvc-7c2d99cc-32b6-448d-a58c-c4322f364536/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-9nc9m
Oct 10 15:47:45.104: INFO: Deleting pod "pvc-volume-tester-9nc9m" in namespace "csi-mock-volumes-6424"
STEP: Deleting claim pvc-c4dsn
Oct 10 15:47:45.535: INFO: Waiting up to 2m0s for PersistentVolume pvc-7c2d99cc-32b6-448d-a58c-c4322f364536 to get deleted
Oct 10 15:47:45.679: INFO: PersistentVolume pvc-7c2d99cc-32b6-448d-a58c-c4322f364536 was removed
STEP: Deleting storageclass csi-mock-volumes-6424-scsvtgk
... skipping 55 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Oct 10 15:47:58.066: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5a7f6dc2-3ff2-4bbc-bcd7-0dcd918a9447" in namespace "downward-api-2580" to be "Succeeded or Failed"
Oct 10 15:47:58.208: INFO: Pod "downwardapi-volume-5a7f6dc2-3ff2-4bbc-bcd7-0dcd918a9447": Phase="Pending", Reason="", readiness=false. Elapsed: 142.732969ms
Oct 10 15:48:00.357: INFO: Pod "downwardapi-volume-5a7f6dc2-3ff2-4bbc-bcd7-0dcd918a9447": Phase="Pending", Reason="", readiness=false. Elapsed: 2.291145752s
Oct 10 15:48:02.503: INFO: Pod "downwardapi-volume-5a7f6dc2-3ff2-4bbc-bcd7-0dcd918a9447": Phase="Pending", Reason="", readiness=false. Elapsed: 4.437635058s
Oct 10 15:48:04.647: INFO: Pod "downwardapi-volume-5a7f6dc2-3ff2-4bbc-bcd7-0dcd918a9447": Phase="Pending", Reason="", readiness=false. Elapsed: 6.581444407s
Oct 10 15:48:06.791: INFO: Pod "downwardapi-volume-5a7f6dc2-3ff2-4bbc-bcd7-0dcd918a9447": Phase="Pending", Reason="", readiness=false. Elapsed: 8.725661087s
Oct 10 15:48:08.946: INFO: Pod "downwardapi-volume-5a7f6dc2-3ff2-4bbc-bcd7-0dcd918a9447": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.880382028s
STEP: Saw pod success
Oct 10 15:48:08.946: INFO: Pod "downwardapi-volume-5a7f6dc2-3ff2-4bbc-bcd7-0dcd918a9447" satisfied condition "Succeeded or Failed"
Oct 10 15:48:09.091: INFO: Trying to get logs from node ip-172-20-42-51.sa-east-1.compute.internal pod downwardapi-volume-5a7f6dc2-3ff2-4bbc-bcd7-0dcd918a9447 container client-container: <nil>
STEP: delete the pod
Oct 10 15:48:09.415: INFO: Waiting for pod downwardapi-volume-5a7f6dc2-3ff2-4bbc-bcd7-0dcd918a9447 to disappear
Oct 10 15:48:09.563: INFO: Pod downwardapi-volume-5a7f6dc2-3ff2-4bbc-bcd7-0dcd918a9447 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:12.649 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":78,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:48:09.870: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 23 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Oct 10 15:48:05.001: INFO: Waiting up to 5m0s for pod "downwardapi-volume-137d8aa6-2469-409f-9b13-1f77ff135f37" in namespace "downward-api-8413" to be "Succeeded or Failed"
Oct 10 15:48:05.145: INFO: Pod "downwardapi-volume-137d8aa6-2469-409f-9b13-1f77ff135f37": Phase="Pending", Reason="", readiness=false. Elapsed: 144.764702ms
Oct 10 15:48:07.290: INFO: Pod "downwardapi-volume-137d8aa6-2469-409f-9b13-1f77ff135f37": Phase="Pending", Reason="", readiness=false. Elapsed: 2.289595673s
Oct 10 15:48:09.437: INFO: Pod "downwardapi-volume-137d8aa6-2469-409f-9b13-1f77ff135f37": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.43654508s
STEP: Saw pod success
Oct 10 15:48:09.437: INFO: Pod "downwardapi-volume-137d8aa6-2469-409f-9b13-1f77ff135f37" satisfied condition "Succeeded or Failed"
Oct 10 15:48:09.588: INFO: Trying to get logs from node ip-172-20-54-137.sa-east-1.compute.internal pod downwardapi-volume-137d8aa6-2469-409f-9b13-1f77ff135f37 container client-container: <nil>
STEP: delete the pod
Oct 10 15:48:09.903: INFO: Waiting for pod downwardapi-volume-137d8aa6-2469-409f-9b13-1f77ff135f37 to disappear
Oct 10 15:48:10.046: INFO: Pod downwardapi-volume-137d8aa6-2469-409f-9b13-1f77ff135f37 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.216 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide podname only [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":11,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:48:10.358: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 75 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:97
    should have a working scale subresource [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":-1,"completed":6,"skipped":28,"failed":0}

SS
------------------------------
{"msg":"PASSED [sig-storage] Subpath Container restart should verify that container can restart successfully after configmaps modified","total":-1,"completed":3,"skipped":19,"failed":0}
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 10 15:47:05.800: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename csi-mock-volumes
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 104 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:317
    should require VolumeAttach for drivers with attachment
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:339
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI attach test using mock driver should require VolumeAttach for drivers with attachment","total":-1,"completed":4,"skipped":19,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:48:10.506: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 97 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":5,"skipped":27,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 28 lines ...
• [SLOW TEST:11.070 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should delete a collection of pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Pods should delete a collection of pods [Conformance]","total":-1,"completed":10,"skipped":78,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:48:12.174: INFO: Only supported for providers [gce gke] (not aws)
... skipping 83 lines ...
• [SLOW TEST:8.475 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":-1,"completed":8,"skipped":57,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 10 15:48:14.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5718" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":-1,"completed":9,"skipped":61,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:48:14.585: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 20 lines ...
Oct 10 15:48:08.070: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support seccomp default which is unconfined [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:183
STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod
Oct 10 15:48:08.937: INFO: Waiting up to 5m0s for pod "security-context-909ea4cd-f1c7-4885-a841-dde9df62f290" in namespace "security-context-4742" to be "Succeeded or Failed"
Oct 10 15:48:09.088: INFO: Pod "security-context-909ea4cd-f1c7-4885-a841-dde9df62f290": Phase="Pending", Reason="", readiness=false. Elapsed: 150.065229ms
Oct 10 15:48:11.232: INFO: Pod "security-context-909ea4cd-f1c7-4885-a841-dde9df62f290": Phase="Pending", Reason="", readiness=false. Elapsed: 2.294020734s
Oct 10 15:48:13.377: INFO: Pod "security-context-909ea4cd-f1c7-4885-a841-dde9df62f290": Phase="Pending", Reason="", readiness=false. Elapsed: 4.439045896s
Oct 10 15:48:15.523: INFO: Pod "security-context-909ea4cd-f1c7-4885-a841-dde9df62f290": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.585323289s
STEP: Saw pod success
Oct 10 15:48:15.523: INFO: Pod "security-context-909ea4cd-f1c7-4885-a841-dde9df62f290" satisfied condition "Succeeded or Failed"
Oct 10 15:48:15.698: INFO: Trying to get logs from node ip-172-20-42-51.sa-east-1.compute.internal pod security-context-909ea4cd-f1c7-4885-a841-dde9df62f290 container test-container: <nil>
STEP: delete the pod
Oct 10 15:48:16.172: INFO: Waiting for pod security-context-909ea4cd-f1c7-4885-a841-dde9df62f290 to disappear
Oct 10 15:48:16.315: INFO: Pod security-context-909ea4cd-f1c7-4885-a841-dde9df62f290 no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:8.534 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support seccomp default which is unconfined [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:183
------------------------------
{"msg":"PASSED [sig-node] Security Context should support seccomp default which is unconfined [LinuxOnly]","total":-1,"completed":6,"skipped":75,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 109 lines ...
• [SLOW TEST:25.130 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":-1,"completed":4,"skipped":18,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:48:17.017: INFO: Driver aws doesn't support ext3 -- skipping
... skipping 58 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 10 15:48:18.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "clientset-1299" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Generated clientset should create v1 cronJobs, delete cronJobs, watch cronJobs","total":-1,"completed":7,"skipped":78,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:48:18.513: INFO: Only supported for providers [vsphere] (not aws)
... skipping 70 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name projected-secret-test-map-cee5a809-96cb-4084-ad02-e84bd1cd593d
STEP: Creating a pod to test consume secrets
Oct 10 15:48:11.420: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-2535b8fd-4fcb-4f91-9939-21796555b28c" in namespace "projected-2170" to be "Succeeded or Failed"
Oct 10 15:48:11.564: INFO: Pod "pod-projected-secrets-2535b8fd-4fcb-4f91-9939-21796555b28c": Phase="Pending", Reason="", readiness=false. Elapsed: 143.863956ms
Oct 10 15:48:13.711: INFO: Pod "pod-projected-secrets-2535b8fd-4fcb-4f91-9939-21796555b28c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.291107055s
Oct 10 15:48:15.867: INFO: Pod "pod-projected-secrets-2535b8fd-4fcb-4f91-9939-21796555b28c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.446990326s
Oct 10 15:48:18.011: INFO: Pod "pod-projected-secrets-2535b8fd-4fcb-4f91-9939-21796555b28c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.5909876s
STEP: Saw pod success
Oct 10 15:48:18.011: INFO: Pod "pod-projected-secrets-2535b8fd-4fcb-4f91-9939-21796555b28c" satisfied condition "Succeeded or Failed"
Oct 10 15:48:18.155: INFO: Trying to get logs from node ip-172-20-54-137.sa-east-1.compute.internal pod pod-projected-secrets-2535b8fd-4fcb-4f91-9939-21796555b28c container projected-secret-volume-test: <nil>
STEP: delete the pod
Oct 10 15:48:18.480: INFO: Waiting for pod pod-projected-secrets-2535b8fd-4fcb-4f91-9939-21796555b28c to disappear
Oct 10 15:48:18.623: INFO: Pod pod-projected-secrets-2535b8fd-4fcb-4f91-9939-21796555b28c no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:8.507 seconds]
[sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":17,"failed":0}

S
------------------------------
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 22 lines ...
• [SLOW TEST:9.831 seconds]
[sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should ensure a single API token exists
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/service_accounts.go:52
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should ensure a single API token exists","total":-1,"completed":9,"skipped":80,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:48:19.719: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 66 lines ...
Oct 10 15:48:11.938: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount projected service account token [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test service account token: 
Oct 10 15:48:12.814: INFO: Waiting up to 5m0s for pod "test-pod-bd0ab8c1-d3cd-4846-9038-9ef7b465bda1" in namespace "svcaccounts-8656" to be "Succeeded or Failed"
Oct 10 15:48:12.967: INFO: Pod "test-pod-bd0ab8c1-d3cd-4846-9038-9ef7b465bda1": Phase="Pending", Reason="", readiness=false. Elapsed: 152.99383ms
Oct 10 15:48:15.115: INFO: Pod "test-pod-bd0ab8c1-d3cd-4846-9038-9ef7b465bda1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.300919451s
Oct 10 15:48:17.260: INFO: Pod "test-pod-bd0ab8c1-d3cd-4846-9038-9ef7b465bda1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.445625115s
Oct 10 15:48:19.404: INFO: Pod "test-pod-bd0ab8c1-d3cd-4846-9038-9ef7b465bda1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.589913638s
Oct 10 15:48:21.549: INFO: Pod "test-pod-bd0ab8c1-d3cd-4846-9038-9ef7b465bda1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.734919496s
Oct 10 15:48:23.694: INFO: Pod "test-pod-bd0ab8c1-d3cd-4846-9038-9ef7b465bda1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.879340672s
STEP: Saw pod success
Oct 10 15:48:23.694: INFO: Pod "test-pod-bd0ab8c1-d3cd-4846-9038-9ef7b465bda1" satisfied condition "Succeeded or Failed"
Oct 10 15:48:23.838: INFO: Trying to get logs from node ip-172-20-61-156.sa-east-1.compute.internal pod test-pod-bd0ab8c1-d3cd-4846-9038-9ef7b465bda1 container agnhost-container: <nil>
STEP: delete the pod
Oct 10 15:48:24.180: INFO: Waiting for pod test-pod-bd0ab8c1-d3cd-4846-9038-9ef7b465bda1 to disappear
Oct 10 15:48:24.332: INFO: Pod test-pod-bd0ab8c1-d3cd-4846-9038-9ef7b465bda1 no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:12.687 seconds]
[sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount projected service account token [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount projected service account token [Conformance]","total":-1,"completed":6,"skipped":30,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 47 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Oct 10 15:48:19.796: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b84f26c2-c81e-4dd8-a089-6438558acaf0" in namespace "projected-6790" to be "Succeeded or Failed"
Oct 10 15:48:19.939: INFO: Pod "downwardapi-volume-b84f26c2-c81e-4dd8-a089-6438558acaf0": Phase="Pending", Reason="", readiness=false. Elapsed: 143.120131ms
Oct 10 15:48:22.097: INFO: Pod "downwardapi-volume-b84f26c2-c81e-4dd8-a089-6438558acaf0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.301027554s
Oct 10 15:48:24.241: INFO: Pod "downwardapi-volume-b84f26c2-c81e-4dd8-a089-6438558acaf0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.444743397s
Oct 10 15:48:26.385: INFO: Pod "downwardapi-volume-b84f26c2-c81e-4dd8-a089-6438558acaf0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.588429721s
Oct 10 15:48:28.529: INFO: Pod "downwardapi-volume-b84f26c2-c81e-4dd8-a089-6438558acaf0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.733317083s
STEP: Saw pod success
Oct 10 15:48:28.530: INFO: Pod "downwardapi-volume-b84f26c2-c81e-4dd8-a089-6438558acaf0" satisfied condition "Succeeded or Failed"
Oct 10 15:48:28.673: INFO: Trying to get logs from node ip-172-20-61-156.sa-east-1.compute.internal pod downwardapi-volume-b84f26c2-c81e-4dd8-a089-6438558acaf0 container client-container: <nil>
STEP: delete the pod
Oct 10 15:48:28.967: INFO: Waiting for pod downwardapi-volume-b84f26c2-c81e-4dd8-a089-6438558acaf0 to disappear
Oct 10 15:48:29.110: INFO: Pod downwardapi-volume-b84f26c2-c81e-4dd8-a089-6438558acaf0 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 22 lines ...
Oct 10 15:47:46.853: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-953z7lqk
STEP: creating a claim
Oct 10 15:47:46.996: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-8cbx
STEP: Creating a pod to test subpath
Oct 10 15:47:47.428: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-8cbx" in namespace "provisioning-953" to be "Succeeded or Failed"
Oct 10 15:47:47.571: INFO: Pod "pod-subpath-test-dynamicpv-8cbx": Phase="Pending", Reason="", readiness=false. Elapsed: 143.184735ms
Oct 10 15:47:49.715: INFO: Pod "pod-subpath-test-dynamicpv-8cbx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.287322669s
Oct 10 15:47:51.860: INFO: Pod "pod-subpath-test-dynamicpv-8cbx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.431568808s
Oct 10 15:47:54.006: INFO: Pod "pod-subpath-test-dynamicpv-8cbx": Phase="Pending", Reason="", readiness=false. Elapsed: 6.578275873s
Oct 10 15:47:56.151: INFO: Pod "pod-subpath-test-dynamicpv-8cbx": Phase="Pending", Reason="", readiness=false. Elapsed: 8.723019902s
Oct 10 15:47:58.298: INFO: Pod "pod-subpath-test-dynamicpv-8cbx": Phase="Pending", Reason="", readiness=false. Elapsed: 10.869639332s
Oct 10 15:48:00.441: INFO: Pod "pod-subpath-test-dynamicpv-8cbx": Phase="Pending", Reason="", readiness=false. Elapsed: 13.013064398s
Oct 10 15:48:02.585: INFO: Pod "pod-subpath-test-dynamicpv-8cbx": Phase="Pending", Reason="", readiness=false. Elapsed: 15.157331048s
Oct 10 15:48:04.729: INFO: Pod "pod-subpath-test-dynamicpv-8cbx": Phase="Pending", Reason="", readiness=false. Elapsed: 17.300462998s
Oct 10 15:48:06.873: INFO: Pod "pod-subpath-test-dynamicpv-8cbx": Phase="Pending", Reason="", readiness=false. Elapsed: 19.445152694s
Oct 10 15:48:09.017: INFO: Pod "pod-subpath-test-dynamicpv-8cbx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 21.589057082s
STEP: Saw pod success
Oct 10 15:48:09.017: INFO: Pod "pod-subpath-test-dynamicpv-8cbx" satisfied condition "Succeeded or Failed"
Oct 10 15:48:09.163: INFO: Trying to get logs from node ip-172-20-33-168.sa-east-1.compute.internal pod pod-subpath-test-dynamicpv-8cbx container test-container-volume-dynamicpv-8cbx: <nil>
STEP: delete the pod
Oct 10 15:48:09.483: INFO: Waiting for pod pod-subpath-test-dynamicpv-8cbx to disappear
Oct 10 15:48:09.633: INFO: Pod pod-subpath-test-dynamicpv-8cbx no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-8cbx
Oct 10 15:48:09.633: INFO: Deleting pod "pod-subpath-test-dynamicpv-8cbx" in namespace "provisioning-953"
... skipping 145 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should create read-only inline ephemeral volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:149
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read-only inline ephemeral volume","total":-1,"completed":5,"skipped":18,"failed":0}

SS
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":18,"failed":0}
[BeforeEach] [sig-network] EndpointSlice
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 10 15:48:29.407: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename endpointslice
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 10 15:48:33.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslice-4829" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","total":-1,"completed":6,"skipped":18,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] Volume limits
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 130 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI Volume expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:562
    should expand volume without restarting pod if nodeExpansion=off
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:591
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should expand volume without restarting pod if nodeExpansion=off","total":-1,"completed":11,"skipped":91,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:48:33.530: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
... skipping 218 lines ...
• [SLOW TEST:34.968 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":-1,"completed":4,"skipped":33,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:48:36.019: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 80 lines ...
Oct 10 15:46:50.403: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-614
Oct 10 15:46:50.547: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-614
Oct 10 15:46:50.690: INFO: creating *v1.StatefulSet: csi-mock-volumes-614-4360/csi-mockplugin
Oct 10 15:46:50.835: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-614
Oct 10 15:46:50.983: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-614"
Oct 10 15:46:51.127: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-614 to register on node ip-172-20-33-168.sa-east-1.compute.internal
I1010 15:47:12.929087    5462 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-614","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null}
I1010 15:47:13.706830    5462 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null}
I1010 15:47:13.891194    5462 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-614","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null}
I1010 15:47:14.083441    5462 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}},{"Type":{"Service":{"type":2}}}]},"Error":"","FullError":null}
I1010 15:47:14.228330    5462 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null}
I1010 15:47:14.431569    5462 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-614","accessible_topology":{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}},"Error":"","FullError":null}
STEP: Creating pod
Oct 10 15:47:18.653: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
I1010 15:47:18.974068    5462 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-cc8ed269-b10c-4234-9def-0a1249ce7f3b","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}],"accessibility_requirements":{"requisite":[{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}],"preferred":[{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}]}},"Response":null,"Error":"rpc error: code = ResourceExhausted desc = fake error","FullError":{"code":8,"message":"fake error"}}
I1010 15:47:21.656529    5462 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-cc8ed269-b10c-4234-9def-0a1249ce7f3b","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}],"accessibility_requirements":{"requisite":[{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}],"preferred":[{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}]}},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-cc8ed269-b10c-4234-9def-0a1249ce7f3b"},"accessible_topology":[{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}]}},"Error":"","FullError":null}
I1010 15:47:23.726680    5462 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I1010 15:47:23.873894    5462 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
Oct 10 15:47:24.019: INFO: >>> kubeConfig: /root/.kube/config
I1010 15:47:25.019934    5462 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-cc8ed269-b10c-4234-9def-0a1249ce7f3b/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-cc8ed269-b10c-4234-9def-0a1249ce7f3b","storage.kubernetes.io/csiProvisionerIdentity":"1633880834302-8081-csi-mock-csi-mock-volumes-614"}},"Response":{},"Error":"","FullError":null}
I1010 15:47:25.508925    5462 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I1010 15:47:25.654867    5462 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
Oct 10 15:47:25.798: INFO: >>> kubeConfig: /root/.kube/config
Oct 10 15:47:26.771: INFO: >>> kubeConfig: /root/.kube/config
Oct 10 15:47:27.737: INFO: >>> kubeConfig: /root/.kube/config
I1010 15:47:28.738134    5462 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-cc8ed269-b10c-4234-9def-0a1249ce7f3b/globalmount","target_path":"/var/lib/kubelet/pods/2b2677e7-e448-420c-a845-aa0079c6a143/volumes/kubernetes.io~csi/pvc-cc8ed269-b10c-4234-9def-0a1249ce7f3b/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-cc8ed269-b10c-4234-9def-0a1249ce7f3b","storage.kubernetes.io/csiProvisionerIdentity":"1633880834302-8081-csi-mock-csi-mock-volumes-614"}},"Response":{},"Error":"","FullError":null}
Oct 10 15:47:33.240: INFO: Deleting pod "pvc-volume-tester-tckpr" in namespace "csi-mock-volumes-614"
Oct 10 15:47:33.407: INFO: Wait up to 5m0s for pod "pvc-volume-tester-tckpr" to be fully deleted
Oct 10 15:47:35.200: INFO: >>> kubeConfig: /root/.kube/config
I1010 15:47:36.231083    5462 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/2b2677e7-e448-420c-a845-aa0079c6a143/volumes/kubernetes.io~csi/pvc-cc8ed269-b10c-4234-9def-0a1249ce7f3b/mount"},"Response":{},"Error":"","FullError":null}
I1010 15:47:36.427272    5462 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I1010 15:47:36.575077    5462 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-cc8ed269-b10c-4234-9def-0a1249ce7f3b/globalmount"},"Response":{},"Error":"","FullError":null}
I1010 15:47:37.854077    5462 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null}
STEP: Checking PVC events
Oct 10 15:47:38.840: INFO: PVC event ADDED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-v6rjs", GenerateName:"pvc-", Namespace:"csi-mock-volumes-614", SelfLink:"", UID:"cc8ed269-b10c-4234-9def-0a1249ce7f3b", ResourceVersion:"6331", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63769477638, loc:(*time.Location)(0xa09bc80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003d01578), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003d01590), Subresource:""}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc0012ad230), VolumeMode:(*v1.PersistentVolumeMode)(0xc0012ad240), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Oct 10 15:47:38.840: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-v6rjs", GenerateName:"pvc-", Namespace:"csi-mock-volumes-614", SelfLink:"", UID:"cc8ed269-b10c-4234-9def-0a1249ce7f3b", ResourceVersion:"6335", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63769477638, loc:(*time.Location)(0xa09bc80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.kubernetes.io/selected-node":"ip-172-20-33-168.sa-east-1.compute.internal"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0020383d8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0020383f0), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002038408), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002038420), Subresource:""}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc00162db80), VolumeMode:(*v1.PersistentVolumeMode)(0xc00162db90), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Oct 10 15:47:38.840: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-v6rjs", GenerateName:"pvc-", Namespace:"csi-mock-volumes-614", SelfLink:"", UID:"cc8ed269-b10c-4234-9def-0a1249ce7f3b", ResourceVersion:"6336", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63769477638, loc:(*time.Location)(0xa09bc80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-614", "volume.kubernetes.io/selected-node":"ip-172-20-33-168.sa-east-1.compute.internal"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002db40d8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002db40f0), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002db4108), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002db4120), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002db4138), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002db4150), Subresource:""}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc001fc4390), VolumeMode:(*v1.PersistentVolumeMode)(0xc001fc43a0), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Oct 10 15:47:38.840: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-v6rjs", GenerateName:"pvc-", Namespace:"csi-mock-volumes-614", SelfLink:"", UID:"cc8ed269-b10c-4234-9def-0a1249ce7f3b", ResourceVersion:"6342", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63769477638, loc:(*time.Location)(0xa09bc80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-614"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002b1e000), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002b1e018), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002b1e030), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002b1e048), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002b1e060), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002b1e078), Subresource:""}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc0026e2020), VolumeMode:(*v1.PersistentVolumeMode)(0xc0026e2030), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Oct 10 15:47:38.841: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-v6rjs", GenerateName:"pvc-", Namespace:"csi-mock-volumes-614", SelfLink:"", UID:"cc8ed269-b10c-4234-9def-0a1249ce7f3b", ResourceVersion:"6436", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63769477638, loc:(*time.Location)(0xa09bc80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-614", "volume.kubernetes.io/selected-node":"ip-172-20-33-168.sa-east-1.compute.internal"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002b1e0a8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002b1e0c0), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002b1e0d8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002b1e0f0), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002b1e108), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002b1e120), Subresource:""}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc0026e2060), VolumeMode:(*v1.PersistentVolumeMode)(0xc0026e2070), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
... skipping 53 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1023
    exhausted, late binding, with topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1081
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume storage capacity exhausted, late binding, with topology","total":-1,"completed":6,"skipped":50,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:48:36.045: INFO: Only supported for providers [gce gke] (not aws)
... skipping 112 lines ...
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 10 15:48:18.578: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 10 15:48:39.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-6698" for this suite.


• [SLOW TEST:21.294 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":-1,"completed":8,"skipped":95,"failed":0}

SSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 70 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume one after the other
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithformat] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":7,"skipped":33,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 48 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 10 15:48:41.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6211" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl create quota should reject quota with invalid scopes","total":-1,"completed":8,"skipped":34,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:48:42.084: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 14 lines ...
      Only supported for node OS distro [gci ubuntu custom] (not debian)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:263
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path","total":-1,"completed":4,"skipped":31,"failed":0}
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 10 15:48:31.661: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 12 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a container with runAsNonRoot
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:104
    should not run with an explicit root user ID [LinuxOnly]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:139
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should not run with an explicit root user ID [LinuxOnly]","total":-1,"completed":5,"skipped":31,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 68 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":63,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:48:43.397: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 9 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196

      Only supported for node OS distro [gci ubuntu custom] (not debian)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:263
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":-1,"completed":6,"skipped":30,"failed":0}
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 10 15:48:43.320: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 15 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 10 15:48:46.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7293" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":-1,"completed":7,"skipped":30,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:48:46.716: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 20 lines ...
Oct 10 15:48:43.003: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/downwardapi.go:109
STEP: Creating a pod to test downward api env vars
Oct 10 15:48:43.866: INFO: Waiting up to 5m0s for pod "downward-api-700efe32-26da-4e61-8a6e-77cf4981597f" in namespace "downward-api-2644" to be "Succeeded or Failed"
Oct 10 15:48:44.010: INFO: Pod "downward-api-700efe32-26da-4e61-8a6e-77cf4981597f": Phase="Pending", Reason="", readiness=false. Elapsed: 143.67905ms
Oct 10 15:48:46.155: INFO: Pod "downward-api-700efe32-26da-4e61-8a6e-77cf4981597f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.288708085s
Oct 10 15:48:48.299: INFO: Pod "downward-api-700efe32-26da-4e61-8a6e-77cf4981597f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.433067356s
STEP: Saw pod success
Oct 10 15:48:48.299: INFO: Pod "downward-api-700efe32-26da-4e61-8a6e-77cf4981597f" satisfied condition "Succeeded or Failed"
Oct 10 15:48:48.444: INFO: Trying to get logs from node ip-172-20-61-156.sa-east-1.compute.internal pod downward-api-700efe32-26da-4e61-8a6e-77cf4981597f container dapi-container: <nil>
STEP: delete the pod
Oct 10 15:48:48.740: INFO: Waiting for pod downward-api-700efe32-26da-4e61-8a6e-77cf4981597f to disappear
Oct 10 15:48:48.882: INFO: Pod downward-api-700efe32-26da-4e61-8a6e-77cf4981597f no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.168 seconds]
[sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/downwardapi.go:109
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]","total":-1,"completed":6,"skipped":39,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:48:49.182: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 155 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: windows-gcepd]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1302
------------------------------
... skipping 124 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: block] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":10,"skipped":100,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:48:50.084: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 137 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Oct 10 15:48:47.594: INFO: Waiting up to 5m0s for pod "downwardapi-volume-143094f8-9fc4-49fa-b4ba-cd6c4a8b1a04" in namespace "downward-api-3051" to be "Succeeded or Failed"
Oct 10 15:48:47.737: INFO: Pod "downwardapi-volume-143094f8-9fc4-49fa-b4ba-cd6c4a8b1a04": Phase="Pending", Reason="", readiness=false. Elapsed: 142.70085ms
Oct 10 15:48:49.880: INFO: Pod "downwardapi-volume-143094f8-9fc4-49fa-b4ba-cd6c4a8b1a04": Phase="Pending", Reason="", readiness=false. Elapsed: 2.285943116s
Oct 10 15:48:52.025: INFO: Pod "downwardapi-volume-143094f8-9fc4-49fa-b4ba-cd6c4a8b1a04": Phase="Pending", Reason="", readiness=false. Elapsed: 4.43034649s
Oct 10 15:48:54.168: INFO: Pod "downwardapi-volume-143094f8-9fc4-49fa-b4ba-cd6c4a8b1a04": Phase="Pending", Reason="", readiness=false. Elapsed: 6.574121713s
Oct 10 15:48:56.312: INFO: Pod "downwardapi-volume-143094f8-9fc4-49fa-b4ba-cd6c4a8b1a04": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.717569108s
STEP: Saw pod success
Oct 10 15:48:56.312: INFO: Pod "downwardapi-volume-143094f8-9fc4-49fa-b4ba-cd6c4a8b1a04" satisfied condition "Succeeded or Failed"
Oct 10 15:48:56.458: INFO: Trying to get logs from node ip-172-20-61-156.sa-east-1.compute.internal pod downwardapi-volume-143094f8-9fc4-49fa-b4ba-cd6c4a8b1a04 container client-container: <nil>
STEP: delete the pod
Oct 10 15:48:56.758: INFO: Waiting for pod downwardapi-volume-143094f8-9fc4-49fa-b4ba-cd6c4a8b1a04 to disappear
Oct 10 15:48:56.905: INFO: Pod downwardapi-volume-143094f8-9fc4-49fa-b4ba-cd6c4a8b1a04 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:10.459 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":33,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 58 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume one after the other
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":7,"skipped":57,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 23 lines ...
• [SLOW TEST:145.294 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":52,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:48:58.273: INFO: Only supported for providers [azure] (not aws)
... skipping 205 lines ...
• [SLOW TEST:9.602 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":131,"failed":0}

SSSSS
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSIServiceAccountToken token should not be plumbed down when csiServiceAccountTokenEnabled=false","total":-1,"completed":3,"skipped":72,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 10 15:48:09.319: INFO: >>> kubeConfig: /root/.kube/config
... skipping 16 lines ...
Oct 10 15:48:28.792: INFO: PersistentVolumeClaim pvc-nld7z found but phase is Pending instead of Bound.
Oct 10 15:48:30.935: INFO: PersistentVolumeClaim pvc-nld7z found and phase=Bound (10.865377964s)
Oct 10 15:48:30.936: INFO: Waiting up to 3m0s for PersistentVolume local-9498k to have phase Bound
Oct 10 15:48:31.079: INFO: PersistentVolume local-9498k found and phase=Bound (143.664346ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-xvnr
STEP: Creating a pod to test atomic-volume-subpath
Oct 10 15:48:31.514: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-xvnr" in namespace "provisioning-6804" to be "Succeeded or Failed"
Oct 10 15:48:31.658: INFO: Pod "pod-subpath-test-preprovisionedpv-xvnr": Phase="Pending", Reason="", readiness=false. Elapsed: 144.208341ms
Oct 10 15:48:33.803: INFO: Pod "pod-subpath-test-preprovisionedpv-xvnr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.289077914s
Oct 10 15:48:35.948: INFO: Pod "pod-subpath-test-preprovisionedpv-xvnr": Phase="Pending", Reason="", readiness=false. Elapsed: 4.433970189s
Oct 10 15:48:38.095: INFO: Pod "pod-subpath-test-preprovisionedpv-xvnr": Phase="Pending", Reason="", readiness=false. Elapsed: 6.58096934s
Oct 10 15:48:40.239: INFO: Pod "pod-subpath-test-preprovisionedpv-xvnr": Phase="Running", Reason="", readiness=true. Elapsed: 8.72531747s
Oct 10 15:48:42.388: INFO: Pod "pod-subpath-test-preprovisionedpv-xvnr": Phase="Running", Reason="", readiness=true. Elapsed: 10.873930547s
... skipping 3 lines ...
Oct 10 15:48:50.968: INFO: Pod "pod-subpath-test-preprovisionedpv-xvnr": Phase="Running", Reason="", readiness=true. Elapsed: 19.454347646s
Oct 10 15:48:53.113: INFO: Pod "pod-subpath-test-preprovisionedpv-xvnr": Phase="Running", Reason="", readiness=true. Elapsed: 21.59888662s
Oct 10 15:48:55.259: INFO: Pod "pod-subpath-test-preprovisionedpv-xvnr": Phase="Running", Reason="", readiness=true. Elapsed: 23.745421512s
Oct 10 15:48:57.404: INFO: Pod "pod-subpath-test-preprovisionedpv-xvnr": Phase="Running", Reason="", readiness=true. Elapsed: 25.890427372s
Oct 10 15:48:59.552: INFO: Pod "pod-subpath-test-preprovisionedpv-xvnr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.037654184s
STEP: Saw pod success
Oct 10 15:48:59.552: INFO: Pod "pod-subpath-test-preprovisionedpv-xvnr" satisfied condition "Succeeded or Failed"
Oct 10 15:48:59.695: INFO: Trying to get logs from node ip-172-20-42-51.sa-east-1.compute.internal pod pod-subpath-test-preprovisionedpv-xvnr container test-container-subpath-preprovisionedpv-xvnr: <nil>
STEP: delete the pod
Oct 10 15:48:59.992: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-xvnr to disappear
Oct 10 15:49:00.136: INFO: Pod pod-subpath-test-preprovisionedpv-xvnr no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-xvnr
Oct 10 15:49:00.136: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-xvnr" in namespace "provisioning-6804"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":4,"skipped":72,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:49:02.166: INFO: Only supported for providers [openstack] (not aws)
... skipping 126 lines ...
• [SLOW TEST:39.714 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":-1,"completed":7,"skipped":31,"failed":0}

S
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 59 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:379
    should support exec
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:391
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec","total":-1,"completed":9,"skipped":44,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
... skipping 9 lines ...
Oct 10 15:47:55.803: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-7163l4krq
STEP: creating a claim
Oct 10 15:47:55.948: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-h6lt
STEP: Creating a pod to test subpath
Oct 10 15:47:56.381: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-h6lt" in namespace "provisioning-7163" to be "Succeeded or Failed"
Oct 10 15:47:56.525: INFO: Pod "pod-subpath-test-dynamicpv-h6lt": Phase="Pending", Reason="", readiness=false. Elapsed: 143.450431ms
Oct 10 15:47:58.670: INFO: Pod "pod-subpath-test-dynamicpv-h6lt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.28817895s
Oct 10 15:48:00.813: INFO: Pod "pod-subpath-test-dynamicpv-h6lt": Phase="Pending", Reason="", readiness=false. Elapsed: 4.432009764s
Oct 10 15:48:02.975: INFO: Pod "pod-subpath-test-dynamicpv-h6lt": Phase="Pending", Reason="", readiness=false. Elapsed: 6.593374135s
Oct 10 15:48:05.121: INFO: Pod "pod-subpath-test-dynamicpv-h6lt": Phase="Pending", Reason="", readiness=false. Elapsed: 8.739850802s
Oct 10 15:48:07.267: INFO: Pod "pod-subpath-test-dynamicpv-h6lt": Phase="Pending", Reason="", readiness=false. Elapsed: 10.885379621s
... skipping 16 lines ...
Oct 10 15:48:43.751: INFO: Pod "pod-subpath-test-dynamicpv-h6lt": Phase="Pending", Reason="", readiness=false. Elapsed: 47.370007804s
Oct 10 15:48:45.896: INFO: Pod "pod-subpath-test-dynamicpv-h6lt": Phase="Pending", Reason="", readiness=false. Elapsed: 49.514326113s
Oct 10 15:48:48.041: INFO: Pod "pod-subpath-test-dynamicpv-h6lt": Phase="Pending", Reason="", readiness=false. Elapsed: 51.659954817s
Oct 10 15:48:50.185: INFO: Pod "pod-subpath-test-dynamicpv-h6lt": Phase="Pending", Reason="", readiness=false. Elapsed: 53.80393407s
Oct 10 15:48:52.329: INFO: Pod "pod-subpath-test-dynamicpv-h6lt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 55.947711956s
STEP: Saw pod success
Oct 10 15:48:52.329: INFO: Pod "pod-subpath-test-dynamicpv-h6lt" satisfied condition "Succeeded or Failed"
Oct 10 15:48:52.473: INFO: Trying to get logs from node ip-172-20-42-51.sa-east-1.compute.internal pod pod-subpath-test-dynamicpv-h6lt container test-container-subpath-dynamicpv-h6lt: <nil>
STEP: delete the pod
Oct 10 15:48:52.775: INFO: Waiting for pod pod-subpath-test-dynamicpv-h6lt to disappear
Oct 10 15:48:52.918: INFO: Pod pod-subpath-test-dynamicpv-h6lt no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-h6lt
Oct 10 15:48:52.918: INFO: Deleting pod "pod-subpath-test-dynamicpv-h6lt" in namespace "provisioning-7163"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:380
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":3,"skipped":10,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:49:04.586: INFO: Only supported for providers [azure] (not aws)
... skipping 43 lines ...
STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Oct 10 15:48:47.765: INFO: File wheezy_udp@dns-test-service-3.dns-9864.svc.cluster.local from pod  dns-9864/dns-test-b642c9ac-303e-4ac8-82ab-0ac190c98a8f contains 'foo.example.com.
' instead of 'bar.example.com.'
Oct 10 15:48:47.918: INFO: Lookups using dns-9864/dns-test-b642c9ac-303e-4ac8-82ab-0ac190c98a8f failed for: [wheezy_udp@dns-test-service-3.dns-9864.svc.cluster.local]

Oct 10 15:48:53.208: INFO: DNS probes using dns-test-b642c9ac-303e-4ac8-82ab-0ac190c98a8f succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9864.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-9864.svc.cluster.local; sleep 1; done
... skipping 2 lines ...

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Oct 10 15:48:58.518: INFO: File wheezy_udp@dns-test-service-3.dns-9864.svc.cluster.local from pod  dns-9864/dns-test-fbd44e6b-7806-4082-9cb2-53b575e9fe58 contains '' instead of '100.64.4.133'
Oct 10 15:48:58.666: INFO: Lookups using dns-9864/dns-test-fbd44e6b-7806-4082-9cb2-53b575e9fe58 failed for: [wheezy_udp@dns-test-service-3.dns-9864.svc.cluster.local]

Oct 10 15:49:04.010: INFO: DNS probes using dns-test-fbd44e6b-7806-4082-9cb2-53b575e9fe58 succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
... skipping 5 lines ...
• [SLOW TEST:55.972 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":-1,"completed":7,"skipped":102,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:49:04.832: INFO: Driver local doesn't support ext4 -- skipping
... skipping 101 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:445
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":5,"skipped":38,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":12,"skipped":128,"failed":0}
[BeforeEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 10 15:48:58.652: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    on terminated container
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:134
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":128,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-network] Service endpoints latency
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 425 lines ...
• [SLOW TEST:14.627 seconds]
[sig-network] Service endpoints latency
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should not be very high  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high  [Conformance]","total":-1,"completed":6,"skipped":67,"failed":0}

SSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 30 lines ...
• [SLOW TEST:15.487 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate configmap [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":-1,"completed":8,"skipped":62,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 119 lines ...
Oct 10 15:49:04.370: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support seccomp unconfined on the container [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:161
STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod
Oct 10 15:49:05.250: INFO: Waiting up to 5m0s for pod "security-context-1afeb3fe-47a9-42e5-874c-c151f6a578f7" in namespace "security-context-4265" to be "Succeeded or Failed"
Oct 10 15:49:05.395: INFO: Pod "security-context-1afeb3fe-47a9-42e5-874c-c151f6a578f7": Phase="Pending", Reason="", readiness=false. Elapsed: 144.081148ms
Oct 10 15:49:07.539: INFO: Pod "security-context-1afeb3fe-47a9-42e5-874c-c151f6a578f7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.288350481s
Oct 10 15:49:09.684: INFO: Pod "security-context-1afeb3fe-47a9-42e5-874c-c151f6a578f7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.433191102s
Oct 10 15:49:11.832: INFO: Pod "security-context-1afeb3fe-47a9-42e5-874c-c151f6a578f7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.581587612s
Oct 10 15:49:13.976: INFO: Pod "security-context-1afeb3fe-47a9-42e5-874c-c151f6a578f7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.725951878s
STEP: Saw pod success
Oct 10 15:49:13.977: INFO: Pod "security-context-1afeb3fe-47a9-42e5-874c-c151f6a578f7" satisfied condition "Succeeded or Failed"
Oct 10 15:49:14.120: INFO: Trying to get logs from node ip-172-20-61-156.sa-east-1.compute.internal pod security-context-1afeb3fe-47a9-42e5-874c-c151f6a578f7 container test-container: <nil>
STEP: delete the pod
Oct 10 15:49:14.414: INFO: Waiting for pod security-context-1afeb3fe-47a9-42e5-874c-c151f6a578f7 to disappear
Oct 10 15:49:14.571: INFO: Pod security-context-1afeb3fe-47a9-42e5-874c-c151f6a578f7 no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:10.490 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support seccomp unconfined on the container [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:161
------------------------------
{"msg":"PASSED [sig-node] Security Context should support seccomp unconfined on the container [LinuxOnly]","total":-1,"completed":8,"skipped":32,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:49:14.873: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 94 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:74
    A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]","total":-1,"completed":8,"skipped":105,"failed":0}

SS
------------------------------
[BeforeEach] [sig-api-machinery] Server request timeout
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 10 15:49:15.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "request-timeout-9678" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Server request timeout the request should be served with a default timeout if the specified timeout in the request URL exceeds maximum allowed","total":-1,"completed":9,"skipped":36,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 17 lines ...
• [SLOW TEST:13.318 seconds]
[sig-auth] Certificates API [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should support building a client with a CSR
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/certificates.go:57
------------------------------
{"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR","total":-1,"completed":4,"skipped":13,"failed":0}

SSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 22 lines ...
• [SLOW TEST:12.160 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should resolve DNS of partial qualified names for the cluster [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:90
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for the cluster [LinuxOnly]","total":-1,"completed":6,"skipped":39,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:49:19.176: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 138 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec through kubectl proxy","total":-1,"completed":5,"skipped":87,"failed":0}
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 10 15:49:19.232: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 14 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 10 15:49:21.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1423" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl apply should apply a new configuration to an existing RC","total":-1,"completed":6,"skipped":87,"failed":0}
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 10 15:49:22.148: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Oct 10 15:49:23.025: INFO: Waiting up to 5m0s for pod "downwardapi-volume-775418e2-37dc-46e9-991a-cc7278d9a9d2" in namespace "projected-1677" to be "Succeeded or Failed"
Oct 10 15:49:23.178: INFO: Pod "downwardapi-volume-775418e2-37dc-46e9-991a-cc7278d9a9d2": Phase="Pending", Reason="", readiness=false. Elapsed: 152.932322ms
Oct 10 15:49:25.323: INFO: Pod "downwardapi-volume-775418e2-37dc-46e9-991a-cc7278d9a9d2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.297960389s
Oct 10 15:49:27.467: INFO: Pod "downwardapi-volume-775418e2-37dc-46e9-991a-cc7278d9a9d2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.442225964s
STEP: Saw pod success
Oct 10 15:49:27.467: INFO: Pod "downwardapi-volume-775418e2-37dc-46e9-991a-cc7278d9a9d2" satisfied condition "Succeeded or Failed"
Oct 10 15:49:27.611: INFO: Trying to get logs from node ip-172-20-54-137.sa-east-1.compute.internal pod downwardapi-volume-775418e2-37dc-46e9-991a-cc7278d9a9d2 container client-container: <nil>
STEP: delete the pod
Oct 10 15:49:27.926: INFO: Waiting for pod downwardapi-volume-775418e2-37dc-46e9-991a-cc7278d9a9d2 to disappear
Oct 10 15:49:28.070: INFO: Pod downwardapi-volume-775418e2-37dc-46e9-991a-cc7278d9a9d2 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.217 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":87,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:49:28.376: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 22 lines ...
Oct 10 15:49:18.014: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support non-existent path
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
Oct 10 15:49:18.736: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Oct 10 15:49:19.038: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-1319" in namespace "provisioning-1319" to be "Succeeded or Failed"
Oct 10 15:49:19.182: INFO: Pod "hostpath-symlink-prep-provisioning-1319": Phase="Pending", Reason="", readiness=false. Elapsed: 143.972083ms
Oct 10 15:49:21.328: INFO: Pod "hostpath-symlink-prep-provisioning-1319": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.29079621s
STEP: Saw pod success
Oct 10 15:49:21.328: INFO: Pod "hostpath-symlink-prep-provisioning-1319" satisfied condition "Succeeded or Failed"
Oct 10 15:49:21.328: INFO: Deleting pod "hostpath-symlink-prep-provisioning-1319" in namespace "provisioning-1319"
Oct 10 15:49:21.481: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-1319" to be fully deleted
Oct 10 15:49:21.625: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-8mfw
STEP: Creating a pod to test subpath
Oct 10 15:49:21.775: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-8mfw" in namespace "provisioning-1319" to be "Succeeded or Failed"
Oct 10 15:49:21.919: INFO: Pod "pod-subpath-test-inlinevolume-8mfw": Phase="Pending", Reason="", readiness=false. Elapsed: 143.977764ms
Oct 10 15:49:24.064: INFO: Pod "pod-subpath-test-inlinevolume-8mfw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.288844026s
Oct 10 15:49:26.220: INFO: Pod "pod-subpath-test-inlinevolume-8mfw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.445319485s
STEP: Saw pod success
Oct 10 15:49:26.221: INFO: Pod "pod-subpath-test-inlinevolume-8mfw" satisfied condition "Succeeded or Failed"
Oct 10 15:49:26.365: INFO: Trying to get logs from node ip-172-20-54-137.sa-east-1.compute.internal pod pod-subpath-test-inlinevolume-8mfw container test-container-volume-inlinevolume-8mfw: <nil>
STEP: delete the pod
Oct 10 15:49:26.673: INFO: Waiting for pod pod-subpath-test-inlinevolume-8mfw to disappear
Oct 10 15:49:26.817: INFO: Pod pod-subpath-test-inlinevolume-8mfw no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-8mfw
Oct 10 15:49:26.817: INFO: Deleting pod "pod-subpath-test-inlinevolume-8mfw" in namespace "provisioning-1319"
STEP: Deleting pod
Oct 10 15:49:26.961: INFO: Deleting pod "pod-subpath-test-inlinevolume-8mfw" in namespace "provisioning-1319"
Oct 10 15:49:27.250: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-1319" in namespace "provisioning-1319" to be "Succeeded or Failed"
Oct 10 15:49:27.393: INFO: Pod "hostpath-symlink-prep-provisioning-1319": Phase="Pending", Reason="", readiness=false. Elapsed: 143.638971ms
Oct 10 15:49:29.538: INFO: Pod "hostpath-symlink-prep-provisioning-1319": Phase="Pending", Reason="", readiness=false. Elapsed: 2.288561876s
Oct 10 15:49:31.682: INFO: Pod "hostpath-symlink-prep-provisioning-1319": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.432492181s
STEP: Saw pod success
Oct 10 15:49:31.682: INFO: Pod "hostpath-symlink-prep-provisioning-1319" satisfied condition "Succeeded or Failed"
Oct 10 15:49:31.682: INFO: Deleting pod "hostpath-symlink-prep-provisioning-1319" in namespace "provisioning-1319"
Oct 10 15:49:31.829: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-1319" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 10 15:49:31.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-1319" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path","total":-1,"completed":5,"skipped":29,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:49:32.313: INFO: Driver "csi-hostpath" does not support FsGroup - skipping
... skipping 142 lines ...
Oct 10 15:48:52.138: INFO: PersistentVolumeClaim csi-hostpathk54z9 found but phase is Pending instead of Bound.
Oct 10 15:48:54.282: INFO: PersistentVolumeClaim csi-hostpathk54z9 found but phase is Pending instead of Bound.
Oct 10 15:48:56.426: INFO: PersistentVolumeClaim csi-hostpathk54z9 found but phase is Pending instead of Bound.
Oct 10 15:48:58.571: INFO: PersistentVolumeClaim csi-hostpathk54z9 found and phase=Bound (17.303542767s)
STEP: Creating pod pod-subpath-test-dynamicpv-p6kc
STEP: Creating a pod to test subpath
Oct 10 15:48:59.004: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-p6kc" in namespace "provisioning-7344" to be "Succeeded or Failed"
Oct 10 15:48:59.148: INFO: Pod "pod-subpath-test-dynamicpv-p6kc": Phase="Pending", Reason="", readiness=false. Elapsed: 143.243377ms
Oct 10 15:49:01.292: INFO: Pod "pod-subpath-test-dynamicpv-p6kc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.28746149s
Oct 10 15:49:03.436: INFO: Pod "pod-subpath-test-dynamicpv-p6kc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.431985691s
Oct 10 15:49:05.584: INFO: Pod "pod-subpath-test-dynamicpv-p6kc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.579701164s
Oct 10 15:49:07.732: INFO: Pod "pod-subpath-test-dynamicpv-p6kc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.727398193s
Oct 10 15:49:09.880: INFO: Pod "pod-subpath-test-dynamicpv-p6kc": Phase="Pending", Reason="", readiness=false. Elapsed: 10.875762272s
Oct 10 15:49:12.026: INFO: Pod "pod-subpath-test-dynamicpv-p6kc": Phase="Pending", Reason="", readiness=false. Elapsed: 13.021292321s
Oct 10 15:49:14.170: INFO: Pod "pod-subpath-test-dynamicpv-p6kc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.165646504s
STEP: Saw pod success
Oct 10 15:49:14.170: INFO: Pod "pod-subpath-test-dynamicpv-p6kc" satisfied condition "Succeeded or Failed"
Oct 10 15:49:14.314: INFO: Trying to get logs from node ip-172-20-54-137.sa-east-1.compute.internal pod pod-subpath-test-dynamicpv-p6kc container test-container-volume-dynamicpv-p6kc: <nil>
STEP: delete the pod
Oct 10 15:49:14.636: INFO: Waiting for pod pod-subpath-test-dynamicpv-p6kc to disappear
Oct 10 15:49:14.780: INFO: Pod pod-subpath-test-dynamicpv-p6kc no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-p6kc
Oct 10 15:49:14.780: INFO: Deleting pod "pod-subpath-test-dynamicpv-p6kc" in namespace "provisioning-7344"
... skipping 176 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI FSGroupPolicy [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1559
    should modify fsGroup if fsGroupPolicy=File
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1583
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=File","total":-1,"completed":6,"skipped":51,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:49:34.742: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 63 lines ...
• [SLOW TEST:263.512 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should *not* be restarted by liveness probe because startup probe delays it
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:348
------------------------------
{"msg":"PASSED [sig-node] Probing container should *not* be restarted by liveness probe because startup probe delays it","total":-1,"completed":1,"skipped":9,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:49:38.388: INFO: Only supported for providers [vsphere] (not aws)
... skipping 117 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:97
    should perform rolling updates and roll backs of template modifications [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":-1,"completed":1,"skipped":13,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:49:41.014: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 65 lines ...
• [SLOW TEST:31.921 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny pod and configmap creation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":-1,"completed":14,"skipped":134,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:49:42.678: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 99 lines ...
• [SLOW TEST:23.600 seconds]
[sig-apps] ReplicationController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should test the lifecycle of a ReplicationController [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]","total":-1,"completed":7,"skipped":54,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:49:42.875: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 40 lines ...
• [SLOW TEST:10.160 seconds]
[sig-node] Events
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":-1,"completed":9,"skipped":43,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:49:43.893: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 86 lines ...
Oct 10 15:48:52.830: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-jfv9x] to have phase Bound
Oct 10 15:48:52.973: INFO: PersistentVolumeClaim pvc-jfv9x found and phase=Bound (142.945883ms)
STEP: Deleting the previously created pod
Oct 10 15:49:13.697: INFO: Deleting pod "pvc-volume-tester-bnn4d" in namespace "csi-mock-volumes-7017"
Oct 10 15:49:13.843: INFO: Wait up to 5m0s for pod "pvc-volume-tester-bnn4d" to be fully deleted
STEP: Checking CSI driver logs
Oct 10 15:49:18.283: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/93906c95-2f90-46a5-a0cf-b813bf000a36/volumes/kubernetes.io~csi/pvc-ed6216fc-74b3-4b3f-9951-82d47a370308/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-bnn4d
Oct 10 15:49:18.284: INFO: Deleting pod "pvc-volume-tester-bnn4d" in namespace "csi-mock-volumes-7017"
STEP: Deleting claim pvc-jfv9x
Oct 10 15:49:18.718: INFO: Waiting up to 2m0s for PersistentVolume pvc-ed6216fc-74b3-4b3f-9951-82d47a370308 to get deleted
Oct 10 15:49:18.867: INFO: PersistentVolume pvc-ed6216fc-74b3-4b3f-9951-82d47a370308 found and phase=Released (149.693387ms)
Oct 10 15:49:21.012: INFO: PersistentVolume pvc-ed6216fc-74b3-4b3f-9951-82d47a370308 found and phase=Released (2.294033686s)
... skipping 47 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSIServiceAccountToken
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1497
    token should not be plumbed down when CSIDriver is not deployed
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1525
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSIServiceAccountToken token should not be plumbed down when CSIDriver is not deployed","total":-1,"completed":5,"skipped":47,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 11 lines ...
Oct 10 15:49:45.601: INFO: Creating a PV followed by a PVC
Oct 10 15:49:45.925: INFO: Waiting for PV local-pvnclcd to bind to PVC pvc-qnhkb
Oct 10 15:49:45.925: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-qnhkb] to have phase Bound
Oct 10 15:49:46.069: INFO: PersistentVolumeClaim pvc-qnhkb found and phase=Bound (143.786821ms)
Oct 10 15:49:46.069: INFO: Waiting up to 3m0s for PersistentVolume local-pvnclcd to have phase Bound
Oct 10 15:49:46.214: INFO: PersistentVolume local-pvnclcd found and phase=Bound (144.845265ms)
[It] should fail scheduling due to different NodeAffinity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:375
STEP: local-volume-type: dir
Oct 10 15:49:46.647: INFO: Waiting up to 5m0s for pod "pod-6cba6c44-c2a1-4a88-9e3e-793e46b0437e" in namespace "persistent-local-volumes-test-2429" to be "Unschedulable"
Oct 10 15:49:46.791: INFO: Pod "pod-6cba6c44-c2a1-4a88-9e3e-793e46b0437e": Phase="Pending", Reason="", readiness=false. Elapsed: 143.914951ms
Oct 10 15:49:46.791: INFO: Pod "pod-6cba6c44-c2a1-4a88-9e3e-793e46b0437e" satisfied condition "Unschedulable"
[AfterEach] Pod with node different from PV's NodeAffinity
... skipping 12 lines ...

• [SLOW TEST:7.810 seconds]
[sig-storage] PersistentVolumes-local 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Pod with node different from PV's NodeAffinity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:347
    should fail scheduling due to different NodeAffinity
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:375
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  Pod with node different from PV's NodeAffinity should fail scheduling due to different NodeAffinity","total":-1,"completed":2,"skipped":16,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 66 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume one after the other
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: block] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":7,"skipped":77,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 65 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":8,"skipped":91,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:49:51.025: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 165 lines ...
Oct 10 15:48:51.413: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [csi-hostpathsjqg4] to have phase Bound
Oct 10 15:48:51.557: INFO: PersistentVolumeClaim csi-hostpathsjqg4 found but phase is Pending instead of Bound.
Oct 10 15:48:53.701: INFO: PersistentVolumeClaim csi-hostpathsjqg4 found but phase is Pending instead of Bound.
Oct 10 15:48:55.845: INFO: PersistentVolumeClaim csi-hostpathsjqg4 found and phase=Bound (4.431503586s)
STEP: Creating pod pod-subpath-test-dynamicpv-kfh4
STEP: Creating a pod to test subpath
Oct 10 15:48:56.276: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-kfh4" in namespace "provisioning-2491" to be "Succeeded or Failed"
Oct 10 15:48:56.420: INFO: Pod "pod-subpath-test-dynamicpv-kfh4": Phase="Pending", Reason="", readiness=false. Elapsed: 143.497335ms
Oct 10 15:48:58.564: INFO: Pod "pod-subpath-test-dynamicpv-kfh4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.287836443s
Oct 10 15:49:00.709: INFO: Pod "pod-subpath-test-dynamicpv-kfh4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.432820524s
Oct 10 15:49:02.856: INFO: Pod "pod-subpath-test-dynamicpv-kfh4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.579797174s
Oct 10 15:49:05.004: INFO: Pod "pod-subpath-test-dynamicpv-kfh4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.728315198s
Oct 10 15:49:07.156: INFO: Pod "pod-subpath-test-dynamicpv-kfh4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.880231929s
Oct 10 15:49:09.301: INFO: Pod "pod-subpath-test-dynamicpv-kfh4": Phase="Pending", Reason="", readiness=false. Elapsed: 13.024567741s
Oct 10 15:49:11.459: INFO: Pod "pod-subpath-test-dynamicpv-kfh4": Phase="Pending", Reason="", readiness=false. Elapsed: 15.182458483s
Oct 10 15:49:13.631: INFO: Pod "pod-subpath-test-dynamicpv-kfh4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.355341792s
STEP: Saw pod success
Oct 10 15:49:13.632: INFO: Pod "pod-subpath-test-dynamicpv-kfh4" satisfied condition "Succeeded or Failed"
Oct 10 15:49:13.779: INFO: Trying to get logs from node ip-172-20-33-168.sa-east-1.compute.internal pod pod-subpath-test-dynamicpv-kfh4 container test-container-subpath-dynamicpv-kfh4: <nil>
STEP: delete the pod
Oct 10 15:49:14.089: INFO: Waiting for pod pod-subpath-test-dynamicpv-kfh4 to disappear
Oct 10 15:49:14.233: INFO: Pod pod-subpath-test-dynamicpv-kfh4 no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-kfh4
Oct 10 15:49:14.233: INFO: Deleting pod "pod-subpath-test-dynamicpv-kfh4" in namespace "provisioning-2491"
... skipping 60 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":11,"skipped":64,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:49:55.427: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 224 lines ...
• [SLOW TEST:12.253 seconds]
[sig-apps] DisruptionController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  evictions: maxUnavailable allow single eviction, percentage => should allow an eviction
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:286
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController evictions: maxUnavailable allow single eviction, percentage =\u003e should allow an eviction","total":-1,"completed":6,"skipped":54,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:49:57.079: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 58 lines ...
• [SLOW TEST:10.489 seconds]
[sig-scheduling] LimitRange
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":-1,"completed":3,"skipped":24,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:49:59.394: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: block]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 103 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 10 15:49:59.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "metrics-grabber-3086" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from a Scheduler.","total":-1,"completed":7,"skipped":56,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:49:59.965: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 78 lines ...
• [SLOW TEST:16.493 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with different stored version [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":-1,"completed":10,"skipped":48,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:50:00.440: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-link-bindmounted]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 13 lines ...
Oct 10 15:48:03.491: INFO: In creating storage class object and pvc objects for driver - sc: &StorageClass{ObjectMeta:{provisioning-3279sdxgk      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Provisioner:kubernetes.io/aws-ebs,Parameters:map[string]string{},ReclaimPolicy:nil,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*WaitForFirstConsumer,AllowedTopologies:[]TopologySelectorTerm{},}, pvc: &PersistentVolumeClaim{ObjectMeta:{ pvc- provisioning-3279    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-3279sdxgk,VolumeMode:nil,DataSource:nil,DataSourceRef:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}, src-pvc: &PersistentVolumeClaim{ObjectMeta:{ pvc- provisioning-3279    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-3279sdxgk,VolumeMode:nil,DataSource:nil,DataSourceRef:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}
STEP: Creating a StorageClass
STEP: creating claim=&PersistentVolumeClaim{ObjectMeta:{ pvc- provisioning-3279    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-3279sdxgk,VolumeMode:nil,DataSource:nil,DataSourceRef:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}
STEP: creating a pod referring to the class=&StorageClass{ObjectMeta:{provisioning-3279sdxgk    d224952f-2b41-48be-a45c-e245eb71cdcf 8490 0 2021-10-10 15:48:03 +0000 UTC <nil> <nil> map[] map[] [] []  [{e2e.test Update storage.k8s.io/v1 2021-10-10 15:48:03 +0000 UTC FieldsV1 {"f:mountOptions":{},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}} }]},Provisioner:kubernetes.io/aws-ebs,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[debug nouid32],AllowVolumeExpansion:nil,VolumeBindingMode:*WaitForFirstConsumer,AllowedTopologies:[]TopologySelectorTerm{},} claim=&PersistentVolumeClaim{ObjectMeta:{pvc-j2kzh pvc- provisioning-3279  be6f307f-3032-4e99-9137-ea66cb0bed4a 8501 0 2021-10-10 15:48:04 +0000 UTC <nil> <nil> map[] map[] [] [kubernetes.io/pvc-protection]  [{e2e.test Update v1 2021-10-10 15:48:04 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:storageClassName":{},"f:volumeMode":{}}} }]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-3279sdxgk,VolumeMode:*Filesystem,DataSource:nil,DataSourceRef:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}
STEP: Deleting pod pod-001a8d7f-089d-497a-ac6d-c6811aace921 in namespace provisioning-3279
STEP: checking the created volume is writable on node {Name: Selector:map[] Affinity:nil}
Oct 10 15:48:25.082: INFO: Waiting up to 15m0s for pod "pvc-volume-tester-writer-g8gtd" in namespace "provisioning-3279" to be "Succeeded or Failed"
Oct 10 15:48:25.227: INFO: Pod "pvc-volume-tester-writer-g8gtd": Phase="Pending", Reason="", readiness=false. Elapsed: 145.432532ms
Oct 10 15:48:27.372: INFO: Pod "pvc-volume-tester-writer-g8gtd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.290134653s
Oct 10 15:48:29.516: INFO: Pod "pvc-volume-tester-writer-g8gtd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.434302528s
Oct 10 15:48:31.661: INFO: Pod "pvc-volume-tester-writer-g8gtd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.579620921s
Oct 10 15:48:33.805: INFO: Pod "pvc-volume-tester-writer-g8gtd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.723371839s
Oct 10 15:48:35.949: INFO: Pod "pvc-volume-tester-writer-g8gtd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.867236258s
... skipping 9 lines ...
Oct 10 15:48:57.404: INFO: Pod "pvc-volume-tester-writer-g8gtd": Phase="Pending", Reason="", readiness=false. Elapsed: 32.321924019s
Oct 10 15:48:59.549: INFO: Pod "pvc-volume-tester-writer-g8gtd": Phase="Pending", Reason="", readiness=false. Elapsed: 34.467815382s
Oct 10 15:49:01.693: INFO: Pod "pvc-volume-tester-writer-g8gtd": Phase="Pending", Reason="", readiness=false. Elapsed: 36.611314891s
Oct 10 15:49:03.837: INFO: Pod "pvc-volume-tester-writer-g8gtd": Phase="Pending", Reason="", readiness=false. Elapsed: 38.755805408s
Oct 10 15:49:05.985: INFO: Pod "pvc-volume-tester-writer-g8gtd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 40.903470429s
STEP: Saw pod success
Oct 10 15:49:05.985: INFO: Pod "pvc-volume-tester-writer-g8gtd" satisfied condition "Succeeded or Failed"
Oct 10 15:49:06.347: INFO: Pod pvc-volume-tester-writer-g8gtd has the following logs: 
Oct 10 15:49:06.347: INFO: Deleting pod "pvc-volume-tester-writer-g8gtd" in namespace "provisioning-3279"
Oct 10 15:49:06.499: INFO: Wait up to 5m0s for pod "pvc-volume-tester-writer-g8gtd" to be fully deleted
STEP: checking the created volume has the correct mount options, is readable and retains data on the same node "ip-172-20-33-168.sa-east-1.compute.internal"
Oct 10 15:49:07.091: INFO: Waiting up to 15m0s for pod "pvc-volume-tester-reader-n2rqx" in namespace "provisioning-3279" to be "Succeeded or Failed"
Oct 10 15:49:07.249: INFO: Pod "pvc-volume-tester-reader-n2rqx": Phase="Pending", Reason="", readiness=false. Elapsed: 158.01536ms
Oct 10 15:49:09.394: INFO: Pod "pvc-volume-tester-reader-n2rqx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.303025391s
Oct 10 15:49:11.537: INFO: Pod "pvc-volume-tester-reader-n2rqx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.446795265s
Oct 10 15:49:13.683: INFO: Pod "pvc-volume-tester-reader-n2rqx": Phase="Pending", Reason="", readiness=false. Elapsed: 6.592558132s
Oct 10 15:49:15.828: INFO: Pod "pvc-volume-tester-reader-n2rqx": Phase="Pending", Reason="", readiness=false. Elapsed: 8.73754239s
Oct 10 15:49:17.974: INFO: Pod "pvc-volume-tester-reader-n2rqx": Phase="Pending", Reason="", readiness=false. Elapsed: 10.883320704s
... skipping 2 lines ...
Oct 10 15:49:24.411: INFO: Pod "pvc-volume-tester-reader-n2rqx": Phase="Pending", Reason="", readiness=false. Elapsed: 17.320126172s
Oct 10 15:49:26.556: INFO: Pod "pvc-volume-tester-reader-n2rqx": Phase="Pending", Reason="", readiness=false. Elapsed: 19.465620868s
Oct 10 15:49:28.701: INFO: Pod "pvc-volume-tester-reader-n2rqx": Phase="Pending", Reason="", readiness=false. Elapsed: 21.610261353s
Oct 10 15:49:30.845: INFO: Pod "pvc-volume-tester-reader-n2rqx": Phase="Pending", Reason="", readiness=false. Elapsed: 23.75402799s
Oct 10 15:49:32.989: INFO: Pod "pvc-volume-tester-reader-n2rqx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 25.89832793s
STEP: Saw pod success
Oct 10 15:49:32.989: INFO: Pod "pvc-volume-tester-reader-n2rqx" satisfied condition "Succeeded or Failed"
Oct 10 15:49:33.286: INFO: Pod pvc-volume-tester-reader-n2rqx has the following logs: hello world

Oct 10 15:49:33.286: INFO: Deleting pod "pvc-volume-tester-reader-n2rqx" in namespace "provisioning-3279"
Oct 10 15:49:33.435: INFO: Wait up to 5m0s for pod "pvc-volume-tester-reader-n2rqx" to be fully deleted
Oct 10 15:49:33.578: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-j2kzh] to have phase Bound
Oct 10 15:49:33.721: INFO: PersistentVolumeClaim pvc-j2kzh found and phase=Bound (143.090471ms)
... skipping 23 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] provisioning
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should provision storage with mount options
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:180
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options","total":-1,"completed":8,"skipped":84,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 66 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume one after the other
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithoutformat] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":7,"skipped":58,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:50:02.376: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 145 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should create read/write inline ephemeral volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:166
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read/write inline ephemeral volume","total":-1,"completed":6,"skipped":72,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:50:04.850: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
... skipping 14 lines ...
      Driver csi-hostpath doesn't support PreprovisionedPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SSSSS
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory","total":-1,"completed":7,"skipped":19,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 10 15:49:33.976: INFO: >>> kubeConfig: /root/.kube/config
... skipping 15 lines ...
Oct 10 15:49:45.194: INFO: PersistentVolumeClaim pvc-pdhbt found but phase is Pending instead of Bound.
Oct 10 15:49:47.339: INFO: PersistentVolumeClaim pvc-pdhbt found and phase=Bound (8.722879227s)
Oct 10 15:49:47.339: INFO: Waiting up to 3m0s for PersistentVolume local-rt8mc to have phase Bound
Oct 10 15:49:47.483: INFO: PersistentVolume local-rt8mc found and phase=Bound (143.7277ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-lv2w
STEP: Creating a pod to test subpath
Oct 10 15:49:47.922: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-lv2w" in namespace "provisioning-6016" to be "Succeeded or Failed"
Oct 10 15:49:48.066: INFO: Pod "pod-subpath-test-preprovisionedpv-lv2w": Phase="Pending", Reason="", readiness=false. Elapsed: 144.173917ms
Oct 10 15:49:50.211: INFO: Pod "pod-subpath-test-preprovisionedpv-lv2w": Phase="Pending", Reason="", readiness=false. Elapsed: 2.289053635s
Oct 10 15:49:52.356: INFO: Pod "pod-subpath-test-preprovisionedpv-lv2w": Phase="Pending", Reason="", readiness=false. Elapsed: 4.434324645s
Oct 10 15:49:54.502: INFO: Pod "pod-subpath-test-preprovisionedpv-lv2w": Phase="Pending", Reason="", readiness=false. Elapsed: 6.579907135s
Oct 10 15:49:56.651: INFO: Pod "pod-subpath-test-preprovisionedpv-lv2w": Phase="Pending", Reason="", readiness=false. Elapsed: 8.729046268s
Oct 10 15:49:58.797: INFO: Pod "pod-subpath-test-preprovisionedpv-lv2w": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.874798119s
STEP: Saw pod success
Oct 10 15:49:58.797: INFO: Pod "pod-subpath-test-preprovisionedpv-lv2w" satisfied condition "Succeeded or Failed"
Oct 10 15:49:58.941: INFO: Trying to get logs from node ip-172-20-61-156.sa-east-1.compute.internal pod pod-subpath-test-preprovisionedpv-lv2w container test-container-subpath-preprovisionedpv-lv2w: <nil>
STEP: delete the pod
Oct 10 15:49:59.248: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-lv2w to disappear
Oct 10 15:49:59.392: INFO: Pod pod-subpath-test-preprovisionedpv-lv2w no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-lv2w
Oct 10 15:49:59.392: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-lv2w" in namespace "provisioning-6016"
STEP: Creating pod pod-subpath-test-preprovisionedpv-lv2w
STEP: Creating a pod to test subpath
Oct 10 15:49:59.682: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-lv2w" in namespace "provisioning-6016" to be "Succeeded or Failed"
Oct 10 15:49:59.826: INFO: Pod "pod-subpath-test-preprovisionedpv-lv2w": Phase="Pending", Reason="", readiness=false. Elapsed: 144.115271ms
Oct 10 15:50:01.972: INFO: Pod "pod-subpath-test-preprovisionedpv-lv2w": Phase="Pending", Reason="", readiness=false. Elapsed: 2.289841294s
Oct 10 15:50:04.117: INFO: Pod "pod-subpath-test-preprovisionedpv-lv2w": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.435373433s
STEP: Saw pod success
Oct 10 15:50:04.117: INFO: Pod "pod-subpath-test-preprovisionedpv-lv2w" satisfied condition "Succeeded or Failed"
Oct 10 15:50:04.261: INFO: Trying to get logs from node ip-172-20-61-156.sa-east-1.compute.internal pod pod-subpath-test-preprovisionedpv-lv2w container test-container-subpath-preprovisionedpv-lv2w: <nil>
STEP: delete the pod
Oct 10 15:50:04.562: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-lv2w to disappear
Oct 10 15:50:04.708: INFO: Pod pod-subpath-test-preprovisionedpv-lv2w no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-lv2w
Oct 10 15:50:04.708: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-lv2w" in namespace "provisioning-6016"
... skipping 117 lines ...
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 10 15:50:00.449: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are not locally restarted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:227
STEP: Looking for a node to schedule job pod
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 10 15:50:11.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-2076" for this suite.


• [SLOW TEST:11.443 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are not locally restarted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:227
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are not locally restarted","total":-1,"completed":11,"skipped":52,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:50:11.924: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 44 lines ...
Oct 10 15:50:00.050: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0777 on tmpfs
Oct 10 15:50:00.967: INFO: Waiting up to 5m0s for pod "pod-d4261618-c011-4bfe-bfef-7018aec1fafa" in namespace "emptydir-6282" to be "Succeeded or Failed"
Oct 10 15:50:01.111: INFO: Pod "pod-d4261618-c011-4bfe-bfef-7018aec1fafa": Phase="Pending", Reason="", readiness=false. Elapsed: 143.353076ms
Oct 10 15:50:03.260: INFO: Pod "pod-d4261618-c011-4bfe-bfef-7018aec1fafa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.293222672s
Oct 10 15:50:05.415: INFO: Pod "pod-d4261618-c011-4bfe-bfef-7018aec1fafa": Phase="Pending", Reason="", readiness=false. Elapsed: 4.447571858s
Oct 10 15:50:07.563: INFO: Pod "pod-d4261618-c011-4bfe-bfef-7018aec1fafa": Phase="Pending", Reason="", readiness=false. Elapsed: 6.595523519s
Oct 10 15:50:09.707: INFO: Pod "pod-d4261618-c011-4bfe-bfef-7018aec1fafa": Phase="Pending", Reason="", readiness=false. Elapsed: 8.739553472s
Oct 10 15:50:11.851: INFO: Pod "pod-d4261618-c011-4bfe-bfef-7018aec1fafa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.883570564s
STEP: Saw pod success
Oct 10 15:50:11.851: INFO: Pod "pod-d4261618-c011-4bfe-bfef-7018aec1fafa" satisfied condition "Succeeded or Failed"
Oct 10 15:50:11.994: INFO: Trying to get logs from node ip-172-20-54-137.sa-east-1.compute.internal pod pod-d4261618-c011-4bfe-bfef-7018aec1fafa container test-container: <nil>
STEP: delete the pod
Oct 10 15:50:12.295: INFO: Waiting for pod pod-d4261618-c011-4bfe-bfef-7018aec1fafa to disappear
Oct 10 15:50:12.438: INFO: Pod pod-d4261618-c011-4bfe-bfef-7018aec1fafa no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:12.677 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":78,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:50:12.738: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 84 lines ...
• [SLOW TEST:13.031 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a persistent volume claim
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/resource_quota.go:480
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim","total":-1,"completed":9,"skipped":89,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 2 lines ...
Oct 10 15:49:50.989: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support readOnly directory specified in the volumeMount
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:365
Oct 10 15:49:51.706: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Oct 10 15:49:51.997: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-4714" in namespace "provisioning-4714" to be "Succeeded or Failed"
Oct 10 15:49:52.141: INFO: Pod "hostpath-symlink-prep-provisioning-4714": Phase="Pending", Reason="", readiness=false. Elapsed: 142.977669ms
Oct 10 15:49:54.284: INFO: Pod "hostpath-symlink-prep-provisioning-4714": Phase="Pending", Reason="", readiness=false. Elapsed: 2.286514018s
Oct 10 15:49:56.429: INFO: Pod "hostpath-symlink-prep-provisioning-4714": Phase="Pending", Reason="", readiness=false. Elapsed: 4.431120971s
Oct 10 15:49:58.573: INFO: Pod "hostpath-symlink-prep-provisioning-4714": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.57592178s
STEP: Saw pod success
Oct 10 15:49:58.574: INFO: Pod "hostpath-symlink-prep-provisioning-4714" satisfied condition "Succeeded or Failed"
Oct 10 15:49:58.574: INFO: Deleting pod "hostpath-symlink-prep-provisioning-4714" in namespace "provisioning-4714"
Oct 10 15:49:58.723: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-4714" to be fully deleted
Oct 10 15:49:58.869: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-725z
STEP: Creating a pod to test subpath
Oct 10 15:49:59.014: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-725z" in namespace "provisioning-4714" to be "Succeeded or Failed"
Oct 10 15:49:59.158: INFO: Pod "pod-subpath-test-inlinevolume-725z": Phase="Pending", Reason="", readiness=false. Elapsed: 143.633562ms
Oct 10 15:50:01.302: INFO: Pod "pod-subpath-test-inlinevolume-725z": Phase="Pending", Reason="", readiness=false. Elapsed: 2.287938203s
Oct 10 15:50:03.446: INFO: Pod "pod-subpath-test-inlinevolume-725z": Phase="Pending", Reason="", readiness=false. Elapsed: 4.431922738s
Oct 10 15:50:05.590: INFO: Pod "pod-subpath-test-inlinevolume-725z": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.575870927s
STEP: Saw pod success
Oct 10 15:50:05.590: INFO: Pod "pod-subpath-test-inlinevolume-725z" satisfied condition "Succeeded or Failed"
Oct 10 15:50:05.747: INFO: Trying to get logs from node ip-172-20-42-51.sa-east-1.compute.internal pod pod-subpath-test-inlinevolume-725z container test-container-subpath-inlinevolume-725z: <nil>
STEP: delete the pod
Oct 10 15:50:06.056: INFO: Waiting for pod pod-subpath-test-inlinevolume-725z to disappear
Oct 10 15:50:06.199: INFO: Pod pod-subpath-test-inlinevolume-725z no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-725z
Oct 10 15:50:06.199: INFO: Deleting pod "pod-subpath-test-inlinevolume-725z" in namespace "provisioning-4714"
STEP: Deleting pod
Oct 10 15:50:06.342: INFO: Deleting pod "pod-subpath-test-inlinevolume-725z" in namespace "provisioning-4714"
Oct 10 15:50:06.632: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-4714" in namespace "provisioning-4714" to be "Succeeded or Failed"
Oct 10 15:50:06.775: INFO: Pod "hostpath-symlink-prep-provisioning-4714": Phase="Pending", Reason="", readiness=false. Elapsed: 143.024798ms
Oct 10 15:50:08.919: INFO: Pod "hostpath-symlink-prep-provisioning-4714": Phase="Pending", Reason="", readiness=false. Elapsed: 2.286487605s
Oct 10 15:50:11.062: INFO: Pod "hostpath-symlink-prep-provisioning-4714": Phase="Pending", Reason="", readiness=false. Elapsed: 4.43027835s
Oct 10 15:50:13.207: INFO: Pod "hostpath-symlink-prep-provisioning-4714": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.57468598s
STEP: Saw pod success
Oct 10 15:50:13.207: INFO: Pod "hostpath-symlink-prep-provisioning-4714" satisfied condition "Succeeded or Failed"
Oct 10 15:50:13.207: INFO: Deleting pod "hostpath-symlink-prep-provisioning-4714" in namespace "provisioning-4714"
Oct 10 15:50:13.354: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-4714" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 10 15:50:13.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-4714" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:365
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":8,"skipped":78,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:50:13.798: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 2 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: hostPathSymlink]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver hostPathSymlink doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]","total":-1,"completed":5,"skipped":27,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 10 15:48:28.587: INFO: >>> kubeConfig: /root/.kube/config
... skipping 170 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volumes should store data","total":-1,"completed":6,"skipped":27,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [sig-node] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 11 lines ...
• [SLOW TEST:11.601 seconds]
[sig-node] Docker Containers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":83,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:50:16.497: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 43 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when scheduling a busybox Pod with hostAliases
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:137
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":62,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 46 lines ...
• [SLOW TEST:67.610 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:237
------------------------------
{"msg":"PASSED [sig-node] Probing container should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance]","total":-1,"completed":9,"skipped":126,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:50:21.507: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 58 lines ...
• [SLOW TEST:49.925 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should not create pods when created in suspend state
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:73
------------------------------
{"msg":"PASSED [sig-apps] Job should not create pods when created in suspend state","total":-1,"completed":6,"skipped":44,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:50:22.283: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 28 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: emptydir]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver emptydir doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 25 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Oct 10 15:50:19.039: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-f7b984d4-a578-4352-af07-cf8de899a0f4" in namespace "security-context-test-1343" to be "Succeeded or Failed"
Oct 10 15:50:19.183: INFO: Pod "busybox-privileged-false-f7b984d4-a578-4352-af07-cf8de899a0f4": Phase="Pending", Reason="", readiness=false. Elapsed: 143.701077ms
Oct 10 15:50:21.330: INFO: Pod "busybox-privileged-false-f7b984d4-a578-4352-af07-cf8de899a0f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.290922179s
Oct 10 15:50:23.488: INFO: Pod "busybox-privileged-false-f7b984d4-a578-4352-af07-cf8de899a0f4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.448948632s
Oct 10 15:50:25.633: INFO: Pod "busybox-privileged-false-f7b984d4-a578-4352-af07-cf8de899a0f4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.593477832s
Oct 10 15:50:25.633: INFO: Pod "busybox-privileged-false-f7b984d4-a578-4352-af07-cf8de899a0f4" satisfied condition "Succeeded or Failed"
Oct 10 15:50:25.790: INFO: Got logs for pod "busybox-privileged-false-f7b984d4-a578-4352-af07-cf8de899a0f4": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 10 15:50:25.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-1343" for this suite.

... skipping 3 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a pod with privileged
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:232
    should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data","total":-1,"completed":10,"skipped":40,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 10 15:50:09.111: INFO: >>> kubeConfig: /root/.kube/config
... skipping 11 lines ...
Oct 10 15:50:15.716: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-s8bc5] to have phase Bound
Oct 10 15:50:15.859: INFO: PersistentVolumeClaim pvc-s8bc5 found and phase=Bound (143.478883ms)
Oct 10 15:50:15.860: INFO: Waiting up to 3m0s for PersistentVolume local-qndhz to have phase Bound
Oct 10 15:50:16.003: INFO: PersistentVolume local-qndhz found and phase=Bound (143.886285ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-l26k
STEP: Creating a pod to test subpath
Oct 10 15:50:16.439: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-l26k" in namespace "provisioning-7347" to be "Succeeded or Failed"
Oct 10 15:50:16.583: INFO: Pod "pod-subpath-test-preprovisionedpv-l26k": Phase="Pending", Reason="", readiness=false. Elapsed: 143.834898ms
Oct 10 15:50:18.734: INFO: Pod "pod-subpath-test-preprovisionedpv-l26k": Phase="Pending", Reason="", readiness=false. Elapsed: 2.294960371s
Oct 10 15:50:20.880: INFO: Pod "pod-subpath-test-preprovisionedpv-l26k": Phase="Pending", Reason="", readiness=false. Elapsed: 4.440264824s
Oct 10 15:50:23.027: INFO: Pod "pod-subpath-test-preprovisionedpv-l26k": Phase="Pending", Reason="", readiness=false. Elapsed: 6.587454339s
Oct 10 15:50:25.171: INFO: Pod "pod-subpath-test-preprovisionedpv-l26k": Phase="Pending", Reason="", readiness=false. Elapsed: 8.732085104s
Oct 10 15:50:27.317: INFO: Pod "pod-subpath-test-preprovisionedpv-l26k": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.877154531s
STEP: Saw pod success
Oct 10 15:50:27.317: INFO: Pod "pod-subpath-test-preprovisionedpv-l26k" satisfied condition "Succeeded or Failed"
Oct 10 15:50:27.460: INFO: Trying to get logs from node ip-172-20-42-51.sa-east-1.compute.internal pod pod-subpath-test-preprovisionedpv-l26k container test-container-subpath-preprovisionedpv-l26k: <nil>
STEP: delete the pod
Oct 10 15:50:27.859: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-l26k to disappear
Oct 10 15:50:28.003: INFO: Pod pod-subpath-test-preprovisionedpv-l26k no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-l26k
Oct 10 15:50:28.004: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-l26k" in namespace "provisioning-7347"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:365
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":11,"skipped":40,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 10 15:50:21.526: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-663c49b3-1e71-481c-a81d-7623d9e14533
STEP: Creating a pod to test consume configMaps
Oct 10 15:50:22.588: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1bdae013-d763-4fb2-82fc-06ef8ee95f6c" in namespace "projected-4853" to be "Succeeded or Failed"
Oct 10 15:50:22.732: INFO: Pod "pod-projected-configmaps-1bdae013-d763-4fb2-82fc-06ef8ee95f6c": Phase="Pending", Reason="", readiness=false. Elapsed: 143.709115ms
Oct 10 15:50:24.875: INFO: Pod "pod-projected-configmaps-1bdae013-d763-4fb2-82fc-06ef8ee95f6c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.287198454s
Oct 10 15:50:27.019: INFO: Pod "pod-projected-configmaps-1bdae013-d763-4fb2-82fc-06ef8ee95f6c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.430945227s
Oct 10 15:50:29.163: INFO: Pod "pod-projected-configmaps-1bdae013-d763-4fb2-82fc-06ef8ee95f6c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.574866501s
STEP: Saw pod success
Oct 10 15:50:29.163: INFO: Pod "pod-projected-configmaps-1bdae013-d763-4fb2-82fc-06ef8ee95f6c" satisfied condition "Succeeded or Failed"
Oct 10 15:50:29.308: INFO: Trying to get logs from node ip-172-20-61-156.sa-east-1.compute.internal pod pod-projected-configmaps-1bdae013-d763-4fb2-82fc-06ef8ee95f6c container agnhost-container: <nil>
STEP: delete the pod
Oct 10 15:50:29.606: INFO: Waiting for pod pod-projected-configmaps-1bdae013-d763-4fb2-82fc-06ef8ee95f6c to disappear
Oct 10 15:50:29.749: INFO: Pod pod-projected-configmaps-1bdae013-d763-4fb2-82fc-06ef8ee95f6c no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:8.511 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":129,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:50:30.057: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 102 lines ...
      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1302
------------------------------
SSSSSS
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":63,"failed":0}
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 10 15:50:26.091: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-map-2a97c169-daca-4bc6-b94a-e79adb42aa09
STEP: Creating a pod to test consume secrets
Oct 10 15:50:27.097: INFO: Waiting up to 5m0s for pod "pod-secrets-d79e97a1-5cfc-4c13-a697-e9ee84ac522b" in namespace "secrets-1338" to be "Succeeded or Failed"
Oct 10 15:50:27.241: INFO: Pod "pod-secrets-d79e97a1-5cfc-4c13-a697-e9ee84ac522b": Phase="Pending", Reason="", readiness=false. Elapsed: 143.442411ms
Oct 10 15:50:29.385: INFO: Pod "pod-secrets-d79e97a1-5cfc-4c13-a697-e9ee84ac522b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.287112074s
STEP: Saw pod success
Oct 10 15:50:29.385: INFO: Pod "pod-secrets-d79e97a1-5cfc-4c13-a697-e9ee84ac522b" satisfied condition "Succeeded or Failed"
Oct 10 15:50:29.528: INFO: Trying to get logs from node ip-172-20-33-168.sa-east-1.compute.internal pod pod-secrets-d79e97a1-5cfc-4c13-a697-e9ee84ac522b container secret-volume-test: <nil>
STEP: delete the pod
Oct 10 15:50:29.822: INFO: Waiting for pod pod-secrets-d79e97a1-5cfc-4c13-a697-e9ee84ac522b to disappear
Oct 10 15:50:29.965: INFO: Pod pod-secrets-d79e97a1-5cfc-4c13-a697-e9ee84ac522b no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 10 15:50:29.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1338" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":63,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
... skipping 124 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should create read-only inline ephemeral volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:149
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read-only inline ephemeral volume","total":-1,"completed":10,"skipped":48,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:50:32.476: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 40 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 10 15:50:33.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2166" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for cronjob","total":-1,"completed":11,"skipped":64,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
... skipping 64 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:208
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents","total":-1,"completed":12,"skipped":136,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:50:35.597: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
... skipping 111 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":4,"skipped":38,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:50:38.266: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 62 lines ...
STEP: Deleting pod verify-service-up-exec-pod-lpdd9 in namespace services-3513
STEP: verifying service-headless is not up
Oct 10 15:49:54.345: INFO: Creating new host exec pod
Oct 10 15:49:54.637: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 10 15:49:56.796: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 10 15:49:58.781: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Oct 10 15:49:58.781: INFO: Running '/tmp/kubectl1123422819/kubectl --server=https://api.e2e-2cea5de97a-6a582.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3513 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.70.179.94:80 && echo service-down-failed'
Oct 10 15:50:02.368: INFO: rc: 28
Oct 10 15:50:02.368: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://100.70.179.94:80 && echo service-down-failed" in pod services-3513/verify-service-down-host-exec-pod: error running /tmp/kubectl1123422819/kubectl --server=https://api.e2e-2cea5de97a-6a582.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3513 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.70.179.94:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://100.70.179.94:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-3513
STEP: adding service.kubernetes.io/headless label
STEP: verifying service is not up
Oct 10 15:50:02.802: INFO: Creating new host exec pod
Oct 10 15:50:03.093: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 10 15:50:05.236: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 10 15:50:07.236: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 10 15:50:09.237: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 10 15:50:11.238: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 10 15:50:13.237: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Oct 10 15:50:13.237: INFO: Running '/tmp/kubectl1123422819/kubectl --server=https://api.e2e-2cea5de97a-6a582.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3513 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.65.14.190:80 && echo service-down-failed'
Oct 10 15:50:16.737: INFO: rc: 28
Oct 10 15:50:16.737: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://100.65.14.190:80 && echo service-down-failed" in pod services-3513/verify-service-down-host-exec-pod: error running /tmp/kubectl1123422819/kubectl --server=https://api.e2e-2cea5de97a-6a582.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3513 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.65.14.190:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://100.65.14.190:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-3513
STEP: removing service.kubernetes.io/headless annotation
STEP: verifying service is up
Oct 10 15:50:17.188: INFO: Creating new host exec pod
... skipping 15 lines ...
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-3513
STEP: Deleting pod verify-service-up-exec-pod-k84hj in namespace services-3513
STEP: verifying service-headless is still not up
Oct 10 15:50:34.099: INFO: Creating new host exec pod
Oct 10 15:50:34.391: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 10 15:50:36.535: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Oct 10 15:50:36.535: INFO: Running '/tmp/kubectl1123422819/kubectl --server=https://api.e2e-2cea5de97a-6a582.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3513 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.70.179.94:80 && echo service-down-failed'
Oct 10 15:50:40.023: INFO: rc: 28
Oct 10 15:50:40.023: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://100.70.179.94:80 && echo service-down-failed" in pod services-3513/verify-service-down-host-exec-pod: error running /tmp/kubectl1123422819/kubectl --server=https://api.e2e-2cea5de97a-6a582.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3513 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.70.179.94:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://100.70.179.94:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-3513
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 10 15:50:40.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 5 lines ...
• [SLOW TEST:84.871 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should implement service.kubernetes.io/headless
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1937
------------------------------
{"msg":"PASSED [sig-network] Services should implement service.kubernetes.io/headless","total":-1,"completed":9,"skipped":107,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:50:40.480: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 62 lines ...
Oct 10 15:50:29.340: INFO: PersistentVolumeClaim pvc-9brfx found but phase is Pending instead of Bound.
Oct 10 15:50:31.484: INFO: PersistentVolumeClaim pvc-9brfx found and phase=Bound (10.861097588s)
Oct 10 15:50:31.484: INFO: Waiting up to 3m0s for PersistentVolume local-6nt4s to have phase Bound
Oct 10 15:50:31.627: INFO: PersistentVolume local-6nt4s found and phase=Bound (143.185689ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-gwwf
STEP: Creating a pod to test subpath
Oct 10 15:50:32.061: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-gwwf" in namespace "provisioning-5843" to be "Succeeded or Failed"
Oct 10 15:50:32.211: INFO: Pod "pod-subpath-test-preprovisionedpv-gwwf": Phase="Pending", Reason="", readiness=false. Elapsed: 150.312759ms
Oct 10 15:50:34.359: INFO: Pod "pod-subpath-test-preprovisionedpv-gwwf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.297824371s
Oct 10 15:50:36.506: INFO: Pod "pod-subpath-test-preprovisionedpv-gwwf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.444745557s
Oct 10 15:50:38.657: INFO: Pod "pod-subpath-test-preprovisionedpv-gwwf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.596671987s
STEP: Saw pod success
Oct 10 15:50:38.658: INFO: Pod "pod-subpath-test-preprovisionedpv-gwwf" satisfied condition "Succeeded or Failed"
Oct 10 15:50:38.800: INFO: Trying to get logs from node ip-172-20-61-156.sa-east-1.compute.internal pod pod-subpath-test-preprovisionedpv-gwwf container test-container-volume-preprovisionedpv-gwwf: <nil>
STEP: delete the pod
Oct 10 15:50:39.640: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-gwwf to disappear
Oct 10 15:50:39.782: INFO: Pod pod-subpath-test-preprovisionedpv-gwwf no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-gwwf
Oct 10 15:50:39.782: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-gwwf" in namespace "provisioning-5843"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":12,"skipped":56,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 15 lines ...
Oct 10 15:50:25.847: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 10 15:50:40.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9906" for this suite.
STEP: Destroying namespace "webhook-9906-markers" for this suite.
... skipping 25 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219

      Driver local doesn't support InlineVolume -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":-1,"completed":9,"skipped":81,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:50:41.801: INFO: Only supported for providers [gce gke] (not aws)
... skipping 127 lines ...
• [SLOW TEST:25.592 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":-1,"completed":8,"skipped":87,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 18 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    when running a container with a new image
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266
      should be able to pull image [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:382
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull image [NodeConformance]","total":-1,"completed":11,"skipped":58,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:50:43.375: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 27 lines ...
STEP: Destroying namespace "node-problem-detector-3754" for this suite.


S [SKIPPING] in Spec Setup (BeforeEach) [1.020 seconds]
[sig-node] NodeProblemDetector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should run without error [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:60

  Only supported for providers [gce gke] (not aws)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:55
------------------------------
... skipping 49 lines ...
Oct 10 15:47:55.796: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-4774 to register on node ip-172-20-33-168.sa-east-1.compute.internal
STEP: Creating pod
Oct 10 15:48:01.372: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Oct 10 15:48:01.517: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-d8fh4] to have phase Bound
Oct 10 15:48:01.661: INFO: PersistentVolumeClaim pvc-d8fh4 found and phase=Bound (143.145176ms)
STEP: checking for CSIInlineVolumes feature
Oct 10 15:48:10.679: INFO: Error getting logs for pod inline-volume-56z2q: the server rejected our request for an unknown reason (get pods inline-volume-56z2q)
Oct 10 15:48:10.966: INFO: Deleting pod "inline-volume-56z2q" in namespace "csi-mock-volumes-4774"
Oct 10 15:48:11.110: INFO: Wait up to 5m0s for pod "inline-volume-56z2q" to be fully deleted
STEP: Deleting the previously created pod
Oct 10 15:50:19.403: INFO: Deleting pod "pvc-volume-tester-pdvvn" in namespace "csi-mock-volumes-4774"
Oct 10 15:50:19.547: INFO: Wait up to 5m0s for pod "pvc-volume-tester-pdvvn" to be fully deleted
STEP: Checking CSI driver logs
Oct 10 15:50:21.987: INFO: Found volume attribute csi.storage.k8s.io/pod.name: pvc-volume-tester-pdvvn
Oct 10 15:50:21.987: INFO: Found volume attribute csi.storage.k8s.io/pod.namespace: csi-mock-volumes-4774
Oct 10 15:50:21.987: INFO: Found volume attribute csi.storage.k8s.io/pod.uid: 46eb81b3-4528-4ec7-91e0-c1ceec7a2ed5
Oct 10 15:50:21.987: INFO: Found volume attribute csi.storage.k8s.io/serviceAccount.name: default
Oct 10 15:50:21.987: INFO: Found volume attribute csi.storage.k8s.io/ephemeral: false
Oct 10 15:50:21.987: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/46eb81b3-4528-4ec7-91e0-c1ceec7a2ed5/volumes/kubernetes.io~csi/pvc-d4d470a6-485c-4e19-97d9-d228b21c582c/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-pdvvn
Oct 10 15:50:21.987: INFO: Deleting pod "pvc-volume-tester-pdvvn" in namespace "csi-mock-volumes-4774"
STEP: Deleting claim pvc-d8fh4
Oct 10 15:50:22.461: INFO: Waiting up to 2m0s for PersistentVolume pvc-d4d470a6-485c-4e19-97d9-d228b21c582c to get deleted
Oct 10 15:50:22.604: INFO: PersistentVolume pvc-d4d470a6-485c-4e19-97d9-d228b21c582c found and phase=Released (143.012941ms)
Oct 10 15:50:24.757: INFO: PersistentVolume pvc-d4d470a6-485c-4e19-97d9-d228b21c582c found and phase=Released (2.296151511s)
... skipping 46 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI workload information using mock driver
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:444
    should be passed when podInfoOnMount=true
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:494
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should be passed when podInfoOnMount=true","total":-1,"completed":11,"skipped":99,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:50:44.735: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 61 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 10 15:50:46.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-5965" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] Events should delete a collection of events [Conformance]","total":-1,"completed":12,"skipped":112,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:50:46.763: INFO: Only supported for providers [gce gke] (not aws)
... skipping 41 lines ...
Oct 10 15:50:30.255: INFO: PersistentVolumeClaim pvc-lfvjl found but phase is Pending instead of Bound.
Oct 10 15:50:32.398: INFO: PersistentVolumeClaim pvc-lfvjl found and phase=Bound (2.287968528s)
Oct 10 15:50:32.398: INFO: Waiting up to 3m0s for PersistentVolume local-cnpbj to have phase Bound
Oct 10 15:50:32.542: INFO: PersistentVolume local-cnpbj found and phase=Bound (143.508151ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-rw52
STEP: Creating a pod to test subpath
Oct 10 15:50:32.974: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-rw52" in namespace "provisioning-2093" to be "Succeeded or Failed"
Oct 10 15:50:33.118: INFO: Pod "pod-subpath-test-preprovisionedpv-rw52": Phase="Pending", Reason="", readiness=false. Elapsed: 143.703811ms
Oct 10 15:50:35.263: INFO: Pod "pod-subpath-test-preprovisionedpv-rw52": Phase="Pending", Reason="", readiness=false. Elapsed: 2.289132755s
Oct 10 15:50:37.409: INFO: Pod "pod-subpath-test-preprovisionedpv-rw52": Phase="Pending", Reason="", readiness=false. Elapsed: 4.434605242s
Oct 10 15:50:39.553: INFO: Pod "pod-subpath-test-preprovisionedpv-rw52": Phase="Pending", Reason="", readiness=false. Elapsed: 6.578827442s
Oct 10 15:50:41.706: INFO: Pod "pod-subpath-test-preprovisionedpv-rw52": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.731561671s
STEP: Saw pod success
Oct 10 15:50:41.706: INFO: Pod "pod-subpath-test-preprovisionedpv-rw52" satisfied condition "Succeeded or Failed"
Oct 10 15:50:41.850: INFO: Trying to get logs from node ip-172-20-61-156.sa-east-1.compute.internal pod pod-subpath-test-preprovisionedpv-rw52 container test-container-subpath-preprovisionedpv-rw52: <nil>
STEP: delete the pod
Oct 10 15:50:42.147: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-rw52 to disappear
Oct 10 15:50:42.296: INFO: Pod pod-subpath-test-preprovisionedpv-rw52 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-rw52
Oct 10 15:50:42.296: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-rw52" in namespace "provisioning-2093"
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:365
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":7,"skipped":52,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 19 lines ...
• [SLOW TEST:65.652 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be restarted by liveness probe after startup probe enables it
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:377
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted by liveness probe after startup probe enables it","total":-1,"completed":8,"skipped":55,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:50:48.548: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 54 lines ...
STEP: Destroying namespace "apply-7214" for this suite.
[AfterEach] [sig-api-machinery] ServerSideApply
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/apply.go:56

•
------------------------------
{"msg":"PASSED [sig-api-machinery] ServerSideApply should give up ownership of a field if forced applied by a controller","total":-1,"completed":13,"skipped":115,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:50:48.964: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
... skipping 130 lines ...
• [SLOW TEST:44.417 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":9,"skipped":88,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:50:57.244: INFO: Only supported for providers [gce gke] (not aws)
... skipping 76 lines ...
Oct 10 15:50:52.324: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Oct 10 15:50:52.324: INFO: Running '/tmp/kubectl1123422819/kubectl --server=https://api.e2e-2cea5de97a-6a582.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-8136 describe pod agnhost-primary-2pt2w'
Oct 10 15:50:53.150: INFO: stderr: ""
Oct 10 15:50:53.150: INFO: stdout: "Name:         agnhost-primary-2pt2w\nNamespace:    kubectl-8136\nPriority:     0\nNode:         ip-172-20-61-156.sa-east-1.compute.internal/172.20.61.156\nStart Time:   Sun, 10 Oct 2021 15:50:44 +0000\nLabels:       app=agnhost\n              role=primary\nAnnotations:  <none>\nStatus:       Running\nIP:           100.96.4.141\nIPs:\n  IP:           100.96.4.141\nControlled By:  ReplicationController/agnhost-primary\nContainers:\n  agnhost-primary:\n    Container ID:   docker://a771e2eea103c8504d26086fd37319fa04be24e69795e61eb2227e9426d2b1be\n    Image:          k8s.gcr.io/e2e-test-images/agnhost:2.32\n    Image ID:       docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Sun, 10 Oct 2021 15:50:46 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fnwqj (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  kube-api-access-fnwqj:\n    Type:                    Projected (a volume that contains injected data from multiple sources)\n    TokenExpirationSeconds:  3607\n    ConfigMapName:           kube-root-ca.crt\n    ConfigMapOptional:       <nil>\n    DownwardAPI:             true\nQoS Class:                   BestEffort\nNode-Selectors:              <none>\nTolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n  Type    Reason     Age   From               Message\n  ----    ------     ----  ----               -------\n  Normal  Scheduled  9s    default-scheduler  Successfully assigned kubectl-8136/agnhost-primary-2pt2w to ip-172-20-61-156.sa-east-1.compute.internal\n  Normal  Pulled     7s    kubelet            Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.32\" already present on machine\n  Normal  Created    7s    kubelet            Created container agnhost-primary\n  Normal  Started    7s    kubelet            Started container agnhost-primary\n"
Oct 10 15:50:53.151: INFO: Running '/tmp/kubectl1123422819/kubectl --server=https://api.e2e-2cea5de97a-6a582.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-8136 describe rc agnhost-primary'
Oct 10 15:50:54.122: INFO: stderr: ""
Oct 10 15:50:54.122: INFO: stdout: "Name:         agnhost-primary\nNamespace:    kubectl-8136\nSelector:     app=agnhost,role=primary\nLabels:       app=agnhost\n              role=primary\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=primary\n  Containers:\n   agnhost-primary:\n    Image:        k8s.gcr.io/e2e-test-images/agnhost:2.32\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  10s   replication-controller  Created pod: agnhost-primary-2pt2w\n"
Oct 10 15:50:54.122: INFO: Running '/tmp/kubectl1123422819/kubectl --server=https://api.e2e-2cea5de97a-6a582.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-8136 describe service agnhost-primary'
Oct 10 15:50:55.068: INFO: stderr: ""
Oct 10 15:50:55.068: INFO: stdout: "Name:              agnhost-primary\nNamespace:         kubectl-8136\nLabels:            app=agnhost\n                   role=primary\nAnnotations:       <none>\nSelector:          app=agnhost,role=primary\nType:              ClusterIP\nIP Family Policy:  SingleStack\nIP Families:       IPv4\nIP:                100.66.221.99\nIPs:               100.66.221.99\nPort:              <unset>  6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         100.96.4.141:6379\nSession Affinity:  None\nEvents:            <none>\n"
Oct 10 15:50:55.213: INFO: Running '/tmp/kubectl1123422819/kubectl --server=https://api.e2e-2cea5de97a-6a582.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-8136 describe node ip-172-20-33-168.sa-east-1.compute.internal'
Oct 10 15:50:56.876: INFO: stderr: ""
Oct 10 15:50:56.876: INFO: stdout: "Name:               ip-172-20-33-168.sa-east-1.compute.internal\nRoles:              node\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/instance-type=t3.medium\n                    beta.kubernetes.io/os=linux\n                    failure-domain.beta.kubernetes.io/region=sa-east-1\n                    failure-domain.beta.kubernetes.io/zone=sa-east-1a\n                    io.kubernetes.storage.mock/node=some-mock-node\n                    kops.k8s.io/instancegroup=nodes-sa-east-1a\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=ip-172-20-33-168.sa-east-1.compute.internal\n                    kubernetes.io/os=linux\n                    kubernetes.io/role=node\n                    node-role.kubernetes.io/node=\n                    node.kubernetes.io/instance-type=t3.medium\n                    topology.ebs.csi.aws.com/zone=sa-east-1a\n                    topology.hostpath.csi/node=ip-172-20-33-168.sa-east-1.compute.internal\n                    topology.kubernetes.io/region=sa-east-1\n                    topology.kubernetes.io/zone=sa-east-1a\nAnnotations:        csi.volume.kubernetes.io/nodeid: {\"csi-mock-csi-mock-volumes-5498\":\"csi-mock-csi-mock-volumes-5498\",\"ebs.csi.aws.com\":\"i-01abff52cbaf0c001\"}\n                    io.cilium.network.ipv4-cilium-host: 100.96.1.160\n                    io.cilium.network.ipv4-health-ip: 100.96.1.253\n                    io.cilium.network.ipv4-pod-cidr: 100.96.1.0/24\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sun, 10 Oct 2021 15:41:19 +0000\nTaints:             <none>\nUnschedulable:      false\nLease:\n  HolderIdentity:  ip-172-20-33-168.sa-east-1.compute.internal\n  AcquireTime:     <unset>\n  RenewTime:       Sun, 10 Oct 2021 15:50:48 +0000\nConditions:\n  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                 ------  -----------------                 ------------------                ------                       -------\n  NetworkUnavailable   False   Sun, 10 Oct 2021 15:41:54 +0000   Sun, 10 Oct 2021 15:41:54 +0000   CiliumIsUp                   Cilium is running on this node\n  MemoryPressure       False   Sun, 10 Oct 2021 15:50:54 +0000   Sun, 10 Oct 2021 15:41:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure         False   Sun, 10 Oct 2021 15:50:54 +0000   Sun, 10 Oct 2021 15:41:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure          False   Sun, 10 Oct 2021 15:50:54 +0000   Sun, 10 Oct 2021 15:41:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                True    Sun, 10 Oct 2021 15:50:54 +0000   Sun, 10 Oct 2021 15:41:49 +0000   KubeletReady                 kubelet is posting ready status. 
AppArmor enabled\nAddresses:\n  InternalIP:   172.20.33.168\n  ExternalIP:   52.67.140.145\n  Hostname:     ip-172-20-33-168.sa-east-1.compute.internal\n  InternalDNS:  ip-172-20-33-168.sa-east-1.compute.internal\n  ExternalDNS:  ec2-52-67-140-145.sa-east-1.compute.amazonaws.com\nCapacity:\n  cpu:                2\n  ephemeral-storage:  48725632Ki\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             3964584Ki\n  pods:               110\nAllocatable:\n  cpu:                2\n  ephemeral-storage:  44905542377\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             3862184Ki\n  pods:               110\nSystem Info:\n  Machine ID:                 ec20f646bff7249061299f0b0185b95f\n  System UUID:                ec20f646-bff7-2490-6129-9f0b0185b95f\n  Boot ID:                    e1b6f579-5a07-41ae-b0d8-9d4e2d260b7a\n  Kernel Version:             5.11.0-1019-aws\n  OS Image:                   Ubuntu 20.04.3 LTS\n  Operating System:           linux\n  Architecture:               amd64\n  Container Runtime Version:  docker://20.10.9\n  Kubelet Version:            v1.22.2\n  Kube-Proxy Version:         v1.22.2\nPodCIDR:                      100.96.1.0/24\nPodCIDRs:                     100.96.1.0/24\nProviderID:                   aws:///sa-east-1a/i-01abff52cbaf0c001\nNon-terminated Pods:          (16 in total)\n  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age\n  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---\n  apply-7214                  deployment-shared-map-item-removal-55649fd747-q84jx    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s\n  csi-mock-volumes-5498-644   csi-mockplugin-0                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m1s\n  csi-mock-volumes-5498-644   csi-mockplugin-attacher-0                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m1s\n  csi-mock-volumes-5498-644   csi-mockplugin-resizer-0                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m1s\n  csi-mock-volumes-5498       pvc-volume-tester-c5gzn                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s\n  kube-system                 cilium-c4pw2                                           100m (5%)     0 (0%)      128Mi (3%)       100Mi (2%)     9m37s\n  kube-system                 coredns-5dc785954d-kbwrz                               100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     10m\n  kube-system                 coredns-autoscaler-84d4cfd89c-qjcr4                    20m (1%)      0 (0%)      10Mi (0%)        0 (0%)         10m\n  kube-system                 ebs-csi-node-f5qpd                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m37s\n  kubectl-1423                agnhost-primary-kptf8                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s\n  nettest-4386                netserver-0                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         22s\n  provisioning-6865           pod-subpath-test-dynamicpv-bh5n                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s\n  services-1728               service-proxy-disabled-nv78m                           0 (0%)        0 (0%)      0 
(0%)           0 (0%)         11s\n  statefulset-2594            ss2-1                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         18s\n  subpath-2509                pod-subpath-test-projected-cscb                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         17s\n  volume-1135                 aws-injector                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests    Limits\n  --------           --------    ------\n  cpu                220m (11%)  0 (0%)\n  memory             208Mi (5%)  270Mi (7%)\n  ephemeral-storage  0 (0%)      0 (0%)\n  hugepages-1Gi      0 (0%)      0 (0%)\n  hugepages-2Mi      0 (0%)      0 (0%)\nEvents:\n  Type    Reason                   Age                  From     Message\n  ----    ------                   ----                 ----     -------\n  Normal  Starting                 10m                  kubelet  Starting kubelet.\n  Normal  NodeAllocatableEnforced  10m                  kubelet  Updated Node Allocatable limit across pods\n  Normal  NodeHasSufficientMemory  9m37s (x4 over 10m)  kubelet  Node ip-172-20-33-168.sa-east-1.compute.internal status is now: NodeHasSufficientMemory\n  Normal  NodeHasNoDiskPressure    9m37s (x4 over 10m)  kubelet  Node ip-172-20-33-168.sa-east-1.compute.internal status is now: NodeHasNoDiskPressure\n  Normal  NodeHasSufficientPID     9m37s (x4 over 10m)  kubelet  Node ip-172-20-33-168.sa-east-1.compute.internal status is now: NodeHasSufficientPID\n  Normal  NodeReady                9m7s                 kubelet  Node ip-172-20-33-168.sa-east-1.compute.internal status is now: NodeReady\n"
... skipping 11 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl describe
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1094
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":-1,"completed":13,"skipped":64,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:50:58.114: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 57 lines ...
Oct 10 15:50:30.665: INFO: PersistentVolumeClaim pvc-wcqqc found but phase is Pending instead of Bound.
Oct 10 15:50:32.809: INFO: PersistentVolumeClaim pvc-wcqqc found and phase=Bound (4.432359347s)
Oct 10 15:50:32.809: INFO: Waiting up to 3m0s for PersistentVolume local-gdkfx to have phase Bound
Oct 10 15:50:32.956: INFO: PersistentVolume local-gdkfx found and phase=Bound (146.439033ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-gsdw
STEP: Creating a pod to test subpath
Oct 10 15:50:33.404: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-gsdw" in namespace "provisioning-4222" to be "Succeeded or Failed"
Oct 10 15:50:33.560: INFO: Pod "pod-subpath-test-preprovisionedpv-gsdw": Phase="Pending", Reason="", readiness=false. Elapsed: 156.150498ms
Oct 10 15:50:35.704: INFO: Pod "pod-subpath-test-preprovisionedpv-gsdw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.300211701s
Oct 10 15:50:37.855: INFO: Pod "pod-subpath-test-preprovisionedpv-gsdw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.451632572s
Oct 10 15:50:40.001: INFO: Pod "pod-subpath-test-preprovisionedpv-gsdw": Phase="Pending", Reason="", readiness=false. Elapsed: 6.596831878s
Oct 10 15:50:42.147: INFO: Pod "pod-subpath-test-preprovisionedpv-gsdw": Phase="Pending", Reason="", readiness=false. Elapsed: 8.743355638s
Oct 10 15:50:44.293: INFO: Pod "pod-subpath-test-preprovisionedpv-gsdw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.88931726s
STEP: Saw pod success
Oct 10 15:50:44.293: INFO: Pod "pod-subpath-test-preprovisionedpv-gsdw" satisfied condition "Succeeded or Failed"
Oct 10 15:50:44.437: INFO: Trying to get logs from node ip-172-20-54-137.sa-east-1.compute.internal pod pod-subpath-test-preprovisionedpv-gsdw container test-container-subpath-preprovisionedpv-gsdw: <nil>
STEP: delete the pod
Oct 10 15:50:44.739: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-gsdw to disappear
Oct 10 15:50:44.883: INFO: Pod pod-subpath-test-preprovisionedpv-gsdw no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-gsdw
Oct 10 15:50:44.883: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-gsdw" in namespace "provisioning-4222"
STEP: Creating pod pod-subpath-test-preprovisionedpv-gsdw
STEP: Creating a pod to test subpath
Oct 10 15:50:45.176: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-gsdw" in namespace "provisioning-4222" to be "Succeeded or Failed"
Oct 10 15:50:45.319: INFO: Pod "pod-subpath-test-preprovisionedpv-gsdw": Phase="Pending", Reason="", readiness=false. Elapsed: 143.749988ms
Oct 10 15:50:47.464: INFO: Pod "pod-subpath-test-preprovisionedpv-gsdw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.28816786s
Oct 10 15:50:49.608: INFO: Pod "pod-subpath-test-preprovisionedpv-gsdw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.432097336s
Oct 10 15:50:51.753: INFO: Pod "pod-subpath-test-preprovisionedpv-gsdw": Phase="Pending", Reason="", readiness=false. Elapsed: 6.577610197s
Oct 10 15:50:53.898: INFO: Pod "pod-subpath-test-preprovisionedpv-gsdw": Phase="Pending", Reason="", readiness=false. Elapsed: 8.722657298s
Oct 10 15:50:56.044: INFO: Pod "pod-subpath-test-preprovisionedpv-gsdw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.86797909s
STEP: Saw pod success
Oct 10 15:50:56.044: INFO: Pod "pod-subpath-test-preprovisionedpv-gsdw" satisfied condition "Succeeded or Failed"
Oct 10 15:50:56.187: INFO: Trying to get logs from node ip-172-20-54-137.sa-east-1.compute.internal pod pod-subpath-test-preprovisionedpv-gsdw container test-container-subpath-preprovisionedpv-gsdw: <nil>
STEP: delete the pod
Oct 10 15:50:56.482: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-gsdw to disappear
Oct 10 15:50:56.626: INFO: Pod pod-subpath-test-preprovisionedpv-gsdw no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-gsdw
Oct 10 15:50:56.626: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-gsdw" in namespace "provisioning-4222"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:395
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":10,"skipped":92,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:50:58.688: INFO: Only supported for providers [gce gke] (not aws)
... skipping 106 lines ...
• [SLOW TEST:80.801 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":19,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:50:59.226: INFO: Driver local doesn't support ext4 -- skipping
... skipping 235 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      Verify if offline PVC expansion works
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:174
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works","total":-1,"completed":12,"skipped":102,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:51:02.673: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 107 lines ...
Oct 10 15:50:43.950: INFO: PersistentVolumeClaim pvc-zsnq2 found but phase is Pending instead of Bound.
Oct 10 15:50:46.097: INFO: PersistentVolumeClaim pvc-zsnq2 found and phase=Bound (10.871557578s)
Oct 10 15:50:46.097: INFO: Waiting up to 3m0s for PersistentVolume local-9wq86 to have phase Bound
Oct 10 15:50:46.241: INFO: PersistentVolume local-9wq86 found and phase=Bound (143.444896ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-hpxs
STEP: Creating a pod to test subpath
Oct 10 15:50:46.709: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-hpxs" in namespace "provisioning-7960" to be "Succeeded or Failed"
Oct 10 15:50:46.855: INFO: Pod "pod-subpath-test-preprovisionedpv-hpxs": Phase="Pending", Reason="", readiness=false. Elapsed: 146.424551ms
Oct 10 15:50:48.999: INFO: Pod "pod-subpath-test-preprovisionedpv-hpxs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.290327854s
Oct 10 15:50:51.143: INFO: Pod "pod-subpath-test-preprovisionedpv-hpxs": Phase="Pending", Reason="", readiness=false. Elapsed: 4.43402394s
Oct 10 15:50:53.288: INFO: Pod "pod-subpath-test-preprovisionedpv-hpxs": Phase="Pending", Reason="", readiness=false. Elapsed: 6.579402514s
Oct 10 15:50:55.433: INFO: Pod "pod-subpath-test-preprovisionedpv-hpxs": Phase="Pending", Reason="", readiness=false. Elapsed: 8.724068669s
Oct 10 15:50:57.578: INFO: Pod "pod-subpath-test-preprovisionedpv-hpxs": Phase="Pending", Reason="", readiness=false. Elapsed: 10.86850231s
Oct 10 15:50:59.722: INFO: Pod "pod-subpath-test-preprovisionedpv-hpxs": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.012930871s
STEP: Saw pod success
Oct 10 15:50:59.722: INFO: Pod "pod-subpath-test-preprovisionedpv-hpxs" satisfied condition "Succeeded or Failed"
Oct 10 15:50:59.867: INFO: Trying to get logs from node ip-172-20-61-156.sa-east-1.compute.internal pod pod-subpath-test-preprovisionedpv-hpxs container test-container-volume-preprovisionedpv-hpxs: <nil>
STEP: delete the pod
Oct 10 15:51:00.162: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-hpxs to disappear
Oct 10 15:51:00.305: INFO: Pod pod-subpath-test-preprovisionedpv-hpxs no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-hpxs
Oct 10 15:51:00.305: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-hpxs" in namespace "provisioning-7960"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":11,"skipped":153,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 4 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating pod pod-subpath-test-projected-cscb
STEP: Creating a pod to test atomic-volume-subpath
Oct 10 15:50:39.440: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-cscb" in namespace "subpath-2509" to be "Succeeded or Failed"
Oct 10 15:50:39.584: INFO: Pod "pod-subpath-test-projected-cscb": Phase="Pending", Reason="", readiness=false. Elapsed: 144.299118ms
Oct 10 15:50:41.730: INFO: Pod "pod-subpath-test-projected-cscb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.290358372s
Oct 10 15:50:43.883: INFO: Pod "pod-subpath-test-projected-cscb": Phase="Running", Reason="", readiness=true. Elapsed: 4.442788873s
Oct 10 15:50:46.030: INFO: Pod "pod-subpath-test-projected-cscb": Phase="Running", Reason="", readiness=true. Elapsed: 6.589683818s
Oct 10 15:50:48.177: INFO: Pod "pod-subpath-test-projected-cscb": Phase="Running", Reason="", readiness=true. Elapsed: 8.737209021s
Oct 10 15:50:50.322: INFO: Pod "pod-subpath-test-projected-cscb": Phase="Running", Reason="", readiness=true. Elapsed: 10.881713566s
Oct 10 15:50:52.471: INFO: Pod "pod-subpath-test-projected-cscb": Phase="Running", Reason="", readiness=true. Elapsed: 13.030916695s
Oct 10 15:50:54.616: INFO: Pod "pod-subpath-test-projected-cscb": Phase="Running", Reason="", readiness=true. Elapsed: 15.17620906s
Oct 10 15:50:56.763: INFO: Pod "pod-subpath-test-projected-cscb": Phase="Running", Reason="", readiness=true. Elapsed: 17.323222783s
Oct 10 15:50:58.908: INFO: Pod "pod-subpath-test-projected-cscb": Phase="Running", Reason="", readiness=true. Elapsed: 19.468256634s
Oct 10 15:51:01.053: INFO: Pod "pod-subpath-test-projected-cscb": Phase="Running", Reason="", readiness=true. Elapsed: 21.612808206s
Oct 10 15:51:03.202: INFO: Pod "pod-subpath-test-projected-cscb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 23.761777805s
STEP: Saw pod success
Oct 10 15:51:03.202: INFO: Pod "pod-subpath-test-projected-cscb" satisfied condition "Succeeded or Failed"
Oct 10 15:51:03.346: INFO: Trying to get logs from node ip-172-20-33-168.sa-east-1.compute.internal pod pod-subpath-test-projected-cscb container test-container-subpath-projected-cscb: <nil>
STEP: delete the pod
Oct 10 15:51:03.653: INFO: Waiting for pod pod-subpath-test-projected-cscb to disappear
Oct 10 15:51:03.799: INFO: Pod pod-subpath-test-projected-cscb no longer exists
STEP: Deleting pod pod-subpath-test-projected-cscb
Oct 10 15:51:03.799: INFO: Deleting pod "pod-subpath-test-projected-cscb" in namespace "subpath-2509"
... skipping 8 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":-1,"completed":5,"skipped":41,"failed":0}

SS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 101 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:97
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":-1,"completed":9,"skipped":98,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 2 lines ...
Oct 10 15:50:40.550: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support existing single file [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
Oct 10 15:50:41.268: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Oct 10 15:50:41.559: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-2127" in namespace "provisioning-2127" to be "Succeeded or Failed"
Oct 10 15:50:41.709: INFO: Pod "hostpath-symlink-prep-provisioning-2127": Phase="Pending", Reason="", readiness=false. Elapsed: 150.164522ms
Oct 10 15:50:43.854: INFO: Pod "hostpath-symlink-prep-provisioning-2127": Phase="Pending", Reason="", readiness=false. Elapsed: 2.294787023s
Oct 10 15:50:46.000: INFO: Pod "hostpath-symlink-prep-provisioning-2127": Phase="Pending", Reason="", readiness=false. Elapsed: 4.441430687s
Oct 10 15:50:48.148: INFO: Pod "hostpath-symlink-prep-provisioning-2127": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.588956432s
STEP: Saw pod success
Oct 10 15:50:48.148: INFO: Pod "hostpath-symlink-prep-provisioning-2127" satisfied condition "Succeeded or Failed"
Oct 10 15:50:48.148: INFO: Deleting pod "hostpath-symlink-prep-provisioning-2127" in namespace "provisioning-2127"
Oct 10 15:50:48.304: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-2127" to be fully deleted
Oct 10 15:50:48.447: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-fdn7
STEP: Creating a pod to test subpath
Oct 10 15:50:48.591: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-fdn7" in namespace "provisioning-2127" to be "Succeeded or Failed"
Oct 10 15:50:48.735: INFO: Pod "pod-subpath-test-inlinevolume-fdn7": Phase="Pending", Reason="", readiness=false. Elapsed: 143.539516ms
Oct 10 15:50:50.878: INFO: Pod "pod-subpath-test-inlinevolume-fdn7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.286738339s
Oct 10 15:50:53.022: INFO: Pod "pod-subpath-test-inlinevolume-fdn7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.431243349s
Oct 10 15:50:55.167: INFO: Pod "pod-subpath-test-inlinevolume-fdn7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.575531342s
Oct 10 15:50:57.311: INFO: Pod "pod-subpath-test-inlinevolume-fdn7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.719717924s
Oct 10 15:50:59.454: INFO: Pod "pod-subpath-test-inlinevolume-fdn7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.862758879s
STEP: Saw pod success
Oct 10 15:50:59.454: INFO: Pod "pod-subpath-test-inlinevolume-fdn7" satisfied condition "Succeeded or Failed"
Oct 10 15:50:59.597: INFO: Trying to get logs from node ip-172-20-61-156.sa-east-1.compute.internal pod pod-subpath-test-inlinevolume-fdn7 container test-container-subpath-inlinevolume-fdn7: <nil>
STEP: delete the pod
Oct 10 15:50:59.891: INFO: Waiting for pod pod-subpath-test-inlinevolume-fdn7 to disappear
Oct 10 15:51:00.033: INFO: Pod pod-subpath-test-inlinevolume-fdn7 no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-fdn7
Oct 10 15:51:00.033: INFO: Deleting pod "pod-subpath-test-inlinevolume-fdn7" in namespace "provisioning-2127"
STEP: Deleting pod
Oct 10 15:51:00.176: INFO: Deleting pod "pod-subpath-test-inlinevolume-fdn7" in namespace "provisioning-2127"
Oct 10 15:51:00.462: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-2127" in namespace "provisioning-2127" to be "Succeeded or Failed"
Oct 10 15:51:00.605: INFO: Pod "hostpath-symlink-prep-provisioning-2127": Phase="Pending", Reason="", readiness=false. Elapsed: 142.637059ms
Oct 10 15:51:02.749: INFO: Pod "hostpath-symlink-prep-provisioning-2127": Phase="Pending", Reason="", readiness=false. Elapsed: 2.28685525s
Oct 10 15:51:04.893: INFO: Pod "hostpath-symlink-prep-provisioning-2127": Phase="Pending", Reason="", readiness=false. Elapsed: 4.430370748s
Oct 10 15:51:07.037: INFO: Pod "hostpath-symlink-prep-provisioning-2127": Phase="Pending", Reason="", readiness=false. Elapsed: 6.574806385s
Oct 10 15:51:09.180: INFO: Pod "hostpath-symlink-prep-provisioning-2127": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.718030573s
STEP: Saw pod success
Oct 10 15:51:09.181: INFO: Pod "hostpath-symlink-prep-provisioning-2127" satisfied condition "Succeeded or Failed"
Oct 10 15:51:09.181: INFO: Deleting pod "hostpath-symlink-prep-provisioning-2127" in namespace "provisioning-2127"
Oct 10 15:51:09.332: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-2127" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 10 15:51:09.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-2127" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":10,"skipped":119,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 10 15:51:09.787: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 42988 lines ...
eated by system administrator\"\nI1010 15:48:52.778359       1 pv_controller.go:879] volume \"pvc-ed6216fc-74b3-4b3f-9951-82d47a370308\" entered phase \"Bound\"\nI1010 15:48:52.778455       1 pv_controller.go:982] volume \"pvc-ed6216fc-74b3-4b3f-9951-82d47a370308\" bound to claim \"csi-mock-volumes-7017/pvc-jfv9x\"\nI1010 15:48:52.785411       1 pv_controller.go:823] claim \"csi-mock-volumes-7017/pvc-jfv9x\" entered phase \"Bound\"\nE1010 15:48:53.169491       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI1010 15:48:53.292467       1 garbagecollector.go:471] \"Processing object\" object=\"dns-9864/dns-test-b642c9ac-303e-4ac8-82ab-0ac190c98a8f\" objectUID=cc096b08-c915-42cd-865b-09414929428d kind=\"CiliumEndpoint\" virtual=false\nI1010 15:48:53.307398       1 garbagecollector.go:580] \"Deleting object\" object=\"dns-9864/dns-test-b642c9ac-303e-4ac8-82ab-0ac190c98a8f\" objectUID=cc096b08-c915-42cd-865b-09414929428d kind=\"CiliumEndpoint\" propagationPolicy=Background\nI1010 15:48:53.413919       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-ed6216fc-74b3-4b3f-9951-82d47a370308\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-7017^4\") from node \"ip-172-20-42-51.sa-east-1.compute.internal\" \nI1010 15:48:53.478877       1 namespace_controller.go:185] Namespace has been deleted dns-2032\nI1010 15:48:53.553600       1 namespace_controller.go:185] Namespace has been deleted container-lifecycle-hook-5081\nI1010 15:48:53.571045       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"provisioning-7163/awshrpxr\"\nI1010 15:48:53.577090       1 pv_controller.go:640] volume \"pvc-db01220a-5fda-4e4f-a9fe-12638d497300\" is released and reclaim policy \"Delete\" will be executed\nI1010 15:48:53.581395       1 pv_controller.go:879] volume \"pvc-db01220a-5fda-4e4f-a9fe-12638d497300\" entered phase \"Released\"\nI1010 15:48:53.584984       1 pv_controller.go:1340] isVolumeReleased[pvc-db01220a-5fda-4e4f-a9fe-12638d497300]: volume is released\nI1010 15:48:53.970197       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume \"pvc-ed6216fc-74b3-4b3f-9951-82d47a370308\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-7017^4\") from node \"ip-172-20-42-51.sa-east-1.compute.internal\" \nI1010 15:48:53.970501       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-7017/pvc-volume-tester-bnn4d\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-ed6216fc-74b3-4b3f-9951-82d47a370308\\\" \"\nI1010 15:48:53.972105       1 pv_controller.go:879] volume \"pvc-e61120ee-ff18-4c37-bd60-8c5c29e67273\" entered phase \"Bound\"\nI1010 15:48:53.972132       1 pv_controller.go:982] volume \"pvc-e61120ee-ff18-4c37-bd60-8c5c29e67273\" bound to claim \"provisioning-2491/csi-hostpathsjqg4\"\nI1010 15:48:53.982232       1 pv_controller.go:823] claim \"provisioning-2491/csi-hostpathsjqg4\" entered phase \"Bound\"\nI1010 15:48:54.027818       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-db01220a-5fda-4e4f-a9fe-12638d497300\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-01026555d60deccd3\") on node \"ip-172-20-42-51.sa-east-1.compute.internal\" \nI1010 15:48:54.030095       1 operation_generator.go:1577] Verified volume is safe to detach for volume 
\"pvc-db01220a-5fda-4e4f-a9fe-12638d497300\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-01026555d60deccd3\") on node \"ip-172-20-42-51.sa-east-1.compute.internal\" \nE1010 15:48:55.118454       1 tokens_controller.go:262] error synchronizing serviceaccount persistent-local-volumes-test-7057/default: secrets \"default-token-f29sx\" is forbidden: unable to create new content in namespace persistent-local-volumes-test-7057 because it is being terminated\nI1010 15:48:55.139236       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume \"pvc-fbfbd524-f1fe-4cb2-8986-02e7fed4b7e0\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0a2ec7eb170a3a95b\") from node \"ip-172-20-42-51.sa-east-1.compute.internal\" \nI1010 15:48:55.139463       1 event.go:291] \"Event occurred\" object=\"statefulset-6702/ss-2\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-fbfbd524-f1fe-4cb2-8986-02e7fed4b7e0\\\" \"\nI1010 15:48:55.295158       1 garbagecollector.go:471] \"Processing object\" object=\"services-3240/pod1\" objectUID=6db002bb-0950-4012-8338-488980903668 kind=\"CiliumEndpoint\" virtual=false\nI1010 15:48:55.316850       1 garbagecollector.go:580] \"Deleting object\" object=\"services-3240/pod1\" objectUID=6db002bb-0950-4012-8338-488980903668 kind=\"CiliumEndpoint\" propagationPolicy=Background\nW1010 15:48:55.323750       1 endpointslice_controller.go:306] Error syncing endpoint slices for service \"services-3240/endpoint-test2\", retrying. Error: EndpointSlice informer cache is out of date\nI1010 15:48:55.327853       1 endpoints_controller.go:374] \"Error syncing endpoints, retrying\" service=\"services-3240/endpoint-test2\" err=\"Operation cannot be fulfilled on endpoints \\\"endpoint-test2\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI1010 15:48:55.328073       1 event.go:291] \"Event occurred\" object=\"services-3240/endpoint-test2\" kind=\"Endpoints\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedToUpdateEndpoint\" message=\"Failed to update endpoint services-3240/endpoint-test2: Operation cannot be fulfilled on endpoints \\\"endpoint-test2\\\": the object has been modified; please apply your changes to the latest version and try again\"\nE1010 15:48:55.516384       1 namespace_controller.go:162] deletion of namespace cronjob-2435 failed: unexpected items still remain in namespace: cronjob-2435 for gvr: /v1, Resource=pods\nI1010 15:48:55.571595       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-5498-644/csi-mockplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-mockplugin-0 in StatefulSet csi-mockplugin successful\"\nI1010 15:48:55.707635       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-5498-644/csi-mockplugin-attacher\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-mockplugin-attacher-0 in StatefulSet csi-mockplugin-attacher successful\"\nI1010 15:48:55.783019       1 pv_controller.go:879] volume \"pvc-748e818d-25c5-4913-92c6-e12947d7f38b\" entered phase \"Bound\"\nI1010 15:48:55.783067       1 pv_controller.go:982] volume \"pvc-748e818d-25c5-4913-92c6-e12947d7f38b\" bound to claim \"ephemeral-883/inline-volume-tester-7xkjn-my-volume-0\"\nI1010 15:48:55.800815       1 pv_controller.go:823] claim \"ephemeral-883/inline-volume-tester-7xkjn-my-volume-0\" 
entered phase \"Bound\"\nI1010 15:48:55.851643       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-5498-644/csi-mockplugin-resizer\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-mockplugin-resizer-0 in StatefulSet csi-mockplugin-resizer successful\"\nI1010 15:48:56.034964       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-3276/pod-b0683248-f0e7-45e9-aefa-91bb95cd164f\" PVC=\"persistent-local-volumes-test-3276/pvc-4lmfq\"\nI1010 15:48:56.034993       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-3276/pvc-4lmfq\"\nI1010 15:48:56.272633       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-e61120ee-ff18-4c37-bd60-8c5c29e67273\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-provisioning-2491^923e37b0-29e1-11ec-b2aa-4ef292621196\") from node \"ip-172-20-33-168.sa-east-1.compute.internal\" \nI1010 15:48:56.329259       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-6638/pod-743699f9-9784-4d65-9ee0-73bbafa79bea\" PVC=\"persistent-local-volumes-test-6638/pvc-cks27\"\nI1010 15:48:56.329446       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-6638/pvc-cks27\"\nI1010 15:48:56.528237       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-3276/pod-b0683248-f0e7-45e9-aefa-91bb95cd164f\" PVC=\"persistent-local-volumes-test-3276/pvc-4lmfq\"\nI1010 15:48:56.528262       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-3276/pvc-4lmfq\"\nI1010 15:48:56.801247       1 stateful_set_control.go:555] StatefulSet statefulset-1274/ss2 terminating Pod ss2-1 for update\nI1010 15:48:56.807505       1 event.go:291] \"Event occurred\" object=\"statefulset-1274/ss2\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"delete Pod ss2-1 in StatefulSet ss2 successful\"\nI1010 15:48:56.830691       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume \"pvc-e61120ee-ff18-4c37-bd60-8c5c29e67273\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-provisioning-2491^923e37b0-29e1-11ec-b2aa-4ef292621196\") from node \"ip-172-20-33-168.sa-east-1.compute.internal\" \nI1010 15:48:56.831032       1 event.go:291] \"Event occurred\" object=\"provisioning-2491/pod-subpath-test-dynamicpv-kfh4\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-e61120ee-ff18-4c37-bd60-8c5c29e67273\\\" \"\nI1010 15:48:57.188876       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-3276/pod-b0683248-f0e7-45e9-aefa-91bb95cd164f\" PVC=\"persistent-local-volumes-test-3276/pvc-4lmfq\"\nI1010 15:48:57.189306       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-3276/pvc-4lmfq\"\nI1010 15:48:57.194176       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"persistent-local-volumes-test-3276/pvc-4lmfq\"\nI1010 15:48:57.199271       1 pv_controller.go:640] volume \"local-pvsdhb4\" is released and reclaim policy \"Retain\" will be executed\nI1010 15:48:57.202506       1 pv_controller.go:879] volume \"local-pvsdhb4\" entered phase \"Released\"\nI1010 15:48:57.206671       1 
pv_controller_base.go:505] deletion of claim \"persistent-local-volumes-test-3276/pvc-4lmfq\" was already processed\nI1010 15:48:57.656292       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-6638/pod-743699f9-9784-4d65-9ee0-73bbafa79bea\" PVC=\"persistent-local-volumes-test-6638/pvc-cks27\"\nI1010 15:48:57.656317       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-6638/pvc-cks27\"\nI1010 15:48:57.789307       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-6638/pod-743699f9-9784-4d65-9ee0-73bbafa79bea\" PVC=\"persistent-local-volumes-test-6638/pvc-cks27\"\nI1010 15:48:57.789561       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-6638/pvc-cks27\"\nI1010 15:48:57.794736       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"persistent-local-volumes-test-6638/pvc-cks27\"\nI1010 15:48:57.800592       1 pv_controller.go:640] volume \"local-pv27cht\" is released and reclaim policy \"Retain\" will be executed\nI1010 15:48:57.806393       1 pv_controller.go:879] volume \"local-pv27cht\" entered phase \"Released\"\nI1010 15:48:57.809406       1 pv_controller_base.go:505] deletion of claim \"persistent-local-volumes-test-6638/pvc-cks27\" was already processed\nI1010 15:48:57.886632       1 garbagecollector.go:471] \"Processing object\" object=\"container-probe-362/liveness-a386ef50-b8c3-4393-9269-0afba532b828\" objectUID=43c32cde-20e3-441b-b7f2-f96e8cd19d41 kind=\"CiliumEndpoint\" virtual=false\nI1010 15:48:57.889296       1 garbagecollector.go:580] \"Deleting object\" object=\"container-probe-362/liveness-a386ef50-b8c3-4393-9269-0afba532b828\" objectUID=43c32cde-20e3-441b-b7f2-f96e8cd19d41 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI1010 15:48:57.929721       1 pv_controller.go:879] volume \"pvc-4239eb73-075d-4989-993e-d59d72c02aef\" entered phase \"Bound\"\nI1010 15:48:57.929763       1 pv_controller.go:982] volume \"pvc-4239eb73-075d-4989-993e-d59d72c02aef\" bound to claim \"provisioning-7344/csi-hostpathk54z9\"\nI1010 15:48:57.938701       1 pv_controller.go:823] claim \"provisioning-7344/csi-hostpathk54z9\" entered phase \"Bound\"\nI1010 15:48:58.220905       1 namespace_controller.go:185] Namespace has been deleted security-context-test-7833\nI1010 15:48:58.705995       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-748e818d-25c5-4913-92c6-e12947d7f38b\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-ephemeral-883^93523a50-29e1-11ec-a7ef-f24f52f035cd\") from node \"ip-172-20-54-137.sa-east-1.compute.internal\" \nI1010 15:48:59.008413       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-4239eb73-075d-4989-993e-d59d72c02aef\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-provisioning-7344^949aa773-29e1-11ec-ad79-962e61cc7e94\") from node \"ip-172-20-54-137.sa-east-1.compute.internal\" \nI1010 15:48:59.141290       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"svc-latency-3888/svc-latency-rc\" need=1 creating=1\nI1010 15:48:59.145802       1 event.go:291] \"Event occurred\" object=\"svc-latency-3888/svc-latency-rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: svc-latency-rc-n2wwg\"\nI1010 15:48:59.243621       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume \"pvc-748e818d-25c5-4913-92c6-e12947d7f38b\" 
(UniqueName: \"kubernetes.io/csi/csi-hostpath-ephemeral-883^93523a50-29e1-11ec-a7ef-f24f52f035cd\") from node \"ip-172-20-54-137.sa-east-1.compute.internal\" \nI1010 15:48:59.243949       1 event.go:291] \"Event occurred\" object=\"ephemeral-883/inline-volume-tester-7xkjn\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-748e818d-25c5-4913-92c6-e12947d7f38b\\\" \"\nI1010 15:48:59.255182       1 namespace_controller.go:185] Namespace has been deleted downward-api-2644\nI1010 15:48:59.573265       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume \"pvc-4239eb73-075d-4989-993e-d59d72c02aef\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-provisioning-7344^949aa773-29e1-11ec-ad79-962e61cc7e94\") from node \"ip-172-20-54-137.sa-east-1.compute.internal\" \nI1010 15:48:59.573370       1 event.go:291] \"Event occurred\" object=\"provisioning-7344/pod-subpath-test-dynamicpv-p6kc\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-4239eb73-075d-4989-993e-d59d72c02aef\\\" \"\nI1010 15:48:59.941091       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"webhook-9905/sample-webhook-deployment-78988fc6cd\" need=1 creating=1\nI1010 15:48:59.941840       1 event.go:291] \"Event occurred\" object=\"webhook-9905/sample-webhook-deployment\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set sample-webhook-deployment-78988fc6cd to 1\"\nI1010 15:48:59.949143       1 event.go:291] \"Event occurred\" object=\"webhook-9905/sample-webhook-deployment-78988fc6cd\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: sample-webhook-deployment-78988fc6cd-bnvwj\"\nI1010 15:48:59.962170       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"webhook-9905/sample-webhook-deployment\" err=\"Operation cannot be fulfilled on deployments.apps \\\"sample-webhook-deployment\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI1010 15:49:00.205558       1 namespace_controller.go:185] Namespace has been deleted persistent-local-volumes-test-7057\nI1010 15:49:00.500793       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"provisioning-6804/pvc-nld7z\"\nI1010 15:49:00.507148       1 pv_controller.go:640] volume \"local-9498k\" is released and reclaim policy \"Retain\" will be executed\nI1010 15:49:00.509926       1 pv_controller.go:879] volume \"local-9498k\" entered phase \"Released\"\nI1010 15:49:00.647210       1 pv_controller_base.go:505] deletion of claim \"provisioning-6804/pvc-nld7z\" was already processed\nI1010 15:49:00.745107       1 event.go:291] \"Event occurred\" object=\"volume-2649/csi-hostpath4jjrh\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-volume-2649\\\" or manually created by system administrator\"\nI1010 15:49:00.751743       1 pv_controller.go:1340] isVolumeReleased[pvc-db01220a-5fda-4e4f-a9fe-12638d497300]: volume is released\nI1010 15:49:00.806018       1 pv_controller.go:1340] isVolumeReleased[pvc-db01220a-5fda-4e4f-a9fe-12638d497300]: volume is released\nI1010 15:49:00.809331       1 event.go:291] \"Event occurred\" 
object=\"fsgroupchangepolicy-8660/awss6pqk\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForFirstConsumer\" message=\"waiting for first consumer to be created before binding\"\nI1010 15:49:00.904787       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume \"pvc-db01220a-5fda-4e4f-a9fe-12638d497300\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-01026555d60deccd3\") on node \"ip-172-20-42-51.sa-east-1.compute.internal\" \nI1010 15:49:00.967181       1 pv_controller_base.go:505] deletion of claim \"provisioning-7163/awshrpxr\" was already processed\nI1010 15:49:01.105509       1 event.go:291] \"Event occurred\" object=\"fsgroupchangepolicy-8660/awss6pqk\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"ebs.csi.aws.com\\\" or manually created by system administrator\"\nI1010 15:49:01.105753       1 event.go:291] \"Event occurred\" object=\"fsgroupchangepolicy-8660/awss6pqk\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"ebs.csi.aws.com\\\" or manually created by system administrator\"\nW1010 15:49:01.607123       1 endpointslice_controller.go:306] Error syncing endpoint slices for service \"statefulset-1274/test\", retrying. Error: EndpointSlice informer cache is out of date\nI1010 15:49:01.608279       1 event.go:291] \"Event occurred\" object=\"statefulset-1274/ss2\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss2-1 in StatefulSet ss2 successful\"\nI1010 15:49:01.893455       1 garbagecollector.go:471] \"Processing object\" object=\"services-3240/pod2\" objectUID=6ba8fcfa-57a7-406f-866b-5c546bdafa08 kind=\"CiliumEndpoint\" virtual=false\nW1010 15:49:01.900008       1 endpointslice_controller.go:306] Error syncing endpoint slices for service \"services-3240/endpoint-test2\", retrying. 
Error: EndpointSlice informer cache is out of date\nI1010 15:49:01.901334       1 garbagecollector.go:580] \"Deleting object\" object=\"services-3240/pod2\" objectUID=6ba8fcfa-57a7-406f-866b-5c546bdafa08 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI1010 15:49:01.946918       1 namespace_controller.go:185] Namespace has been deleted kubectl-7293\nI1010 15:49:02.736701       1 garbagecollector.go:471] \"Processing object\" object=\"kubectl-1624/httpd\" objectUID=317d3f8d-56a0-41d6-90a4-793060a5d5aa kind=\"CiliumEndpoint\" virtual=false\nI1010 15:49:02.742469       1 garbagecollector.go:580] \"Deleting object\" object=\"kubectl-1624/httpd\" objectUID=317d3f8d-56a0-41d6-90a4-793060a5d5aa kind=\"CiliumEndpoint\" propagationPolicy=Background\nE1010 15:49:02.947865       1 tokens_controller.go:262] error synchronizing serviceaccount persistent-local-volumes-test-6638/default: secrets \"default-token-t7qfz\" is forbidden: unable to create new content in namespace persistent-local-volumes-test-6638 because it is being terminated\nI1010 15:49:02.991044       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"provisioning-7782/pvc-2hmdg\"\nI1010 15:49:02.997700       1 pv_controller.go:640] volume \"local-w4mrh\" is released and reclaim policy \"Retain\" will be executed\nI1010 15:49:03.003052       1 pv_controller.go:879] volume \"local-w4mrh\" entered phase \"Released\"\nI1010 15:49:03.134826       1 pv_controller_base.go:505] deletion of claim \"provisioning-7782/pvc-2hmdg\" was already processed\nE1010 15:49:03.712477       1 tokens_controller.go:262] error synchronizing serviceaccount persistent-local-volumes-test-3276/default: secrets \"default-token-9p9w5\" is forbidden: unable to create new content in namespace persistent-local-volumes-test-3276 because it is being terminated\nI1010 15:49:03.898468       1 garbagecollector.go:471] \"Processing object\" object=\"services-3240/endpoint-test2-ssz4r\" objectUID=3773fc53-6f8f-4f18-b744-7b73b4d67b0c kind=\"EndpointSlice\" virtual=false\nI1010 15:49:03.901887       1 garbagecollector.go:580] \"Deleting object\" object=\"services-3240/endpoint-test2-ssz4r\" objectUID=3773fc53-6f8f-4f18-b744-7b73b4d67b0c kind=\"EndpointSlice\" propagationPolicy=Background\nI1010 15:49:04.138288       1 garbagecollector.go:471] \"Processing object\" object=\"dns-9864/dns-test-fbd44e6b-7806-4082-9cb2-53b575e9fe58\" objectUID=0befb752-245f-4089-880d-382b3d745497 kind=\"CiliumEndpoint\" virtual=false\nI1010 15:49:04.162616       1 garbagecollector.go:580] \"Deleting object\" object=\"dns-9864/dns-test-fbd44e6b-7806-4082-9cb2-53b575e9fe58\" objectUID=0befb752-245f-4089-880d-382b3d745497 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI1010 15:49:04.627864       1 pv_controller.go:879] volume \"pvc-93ad28a7-20f7-4c57-8c74-2a59d3aecbfc\" entered phase \"Bound\"\nI1010 15:49:04.627945       1 pv_controller.go:982] volume \"pvc-93ad28a7-20f7-4c57-8c74-2a59d3aecbfc\" bound to claim \"fsgroupchangepolicy-8660/awss6pqk\"\nI1010 15:49:04.647615       1 pv_controller.go:823] claim \"fsgroupchangepolicy-8660/awss6pqk\" entered phase \"Bound\"\nE1010 15:49:04.832159       1 tokens_controller.go:262] error synchronizing serviceaccount projected-5714/default: secrets \"default-token-h8hk6\" is forbidden: unable to create new content in namespace projected-5714 because it is being terminated\nE1010 15:49:04.926476       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list 
*v1.PartialObjectMetadata: the server could not find the requested resource\nI1010 15:49:05.168580       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-93ad28a7-20f7-4c57-8c74-2a59d3aecbfc\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0f605956eedeb0544\") from node \"ip-172-20-61-156.sa-east-1.compute.internal\" \nE1010 15:49:05.365749       1 pv_controller.go:1451] error finding provisioning plugin for claim ephemeral-7457/inline-volume-9rpwm-my-volume: storageclass.storage.k8s.io \"no-such-storage-class\" not found\nI1010 15:49:05.366030       1 event.go:291] \"Event occurred\" object=\"ephemeral-7457/inline-volume-9rpwm-my-volume\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"no-such-storage-class\\\" not found\"\nI1010 15:49:05.786568       1 graph_builder.go:587] add [v1/Pod, namespace: ephemeral-7457, name: inline-volume-9rpwm, uid: 9d25f88d-96b4-42db-a042-5d4674a17162] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI1010 15:49:05.786822       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-7457/inline-volume-9rpwm-my-volume\" objectUID=2c23e571-64fc-42fa-b86a-f202897f2302 kind=\"PersistentVolumeClaim\" virtual=false\nI1010 15:49:05.787343       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-7457/inline-volume-9rpwm\" objectUID=9d25f88d-96b4-42db-a042-5d4674a17162 kind=\"Pod\" virtual=false\nI1010 15:49:05.791707       1 garbagecollector.go:595] adding [v1/PersistentVolumeClaim, namespace: ephemeral-7457, name: inline-volume-9rpwm-my-volume, uid: 2c23e571-64fc-42fa-b86a-f202897f2302] to attemptToDelete, because its owner [v1/Pod, namespace: ephemeral-7457, name: inline-volume-9rpwm, uid: 9d25f88d-96b4-42db-a042-5d4674a17162] is deletingDependents\nI1010 15:49:05.794090       1 garbagecollector.go:580] \"Deleting object\" object=\"ephemeral-7457/inline-volume-9rpwm-my-volume\" objectUID=2c23e571-64fc-42fa-b86a-f202897f2302 kind=\"PersistentVolumeClaim\" propagationPolicy=Background\nE1010 15:49:05.803994       1 pv_controller.go:1451] error finding provisioning plugin for claim ephemeral-7457/inline-volume-9rpwm-my-volume: storageclass.storage.k8s.io \"no-such-storage-class\" not found\nI1010 15:49:05.804334       1 event.go:291] \"Event occurred\" object=\"ephemeral-7457/inline-volume-9rpwm-my-volume\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"no-such-storage-class\\\" not found\"\nI1010 15:49:05.804784       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-7457/inline-volume-9rpwm-my-volume\" objectUID=2c23e571-64fc-42fa-b86a-f202897f2302 kind=\"PersistentVolumeClaim\" virtual=false\nI1010 15:49:05.809614       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"ephemeral-7457/inline-volume-9rpwm-my-volume\"\nI1010 15:49:05.819193       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-7457/inline-volume-9rpwm\" objectUID=9d25f88d-96b4-42db-a042-5d4674a17162 kind=\"Pod\" virtual=false\nI1010 15:49:05.821729       1 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: ephemeral-7457, name: inline-volume-9rpwm, uid: 9d25f88d-96b4-42db-a042-5d4674a17162]\nE1010 15:49:05.919480       1 namespace_controller.go:162] deletion of namespace cronjob-2435 failed: unexpected items still remain in namespace: cronjob-2435 
for gvr: /v1, Resource=pods\nI1010 15:49:06.020829       1 stateful_set_control.go:555] StatefulSet statefulset-1274/ss2 terminating Pod ss2-0 for update\nI1010 15:49:06.025436       1 event.go:291] \"Event occurred\" object=\"statefulset-1274/ss2\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"delete Pod ss2-0 in StatefulSet ss2 successful\"\nI1010 15:49:06.230542       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-5498/pvc-mxl2l\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-mock-csi-mock-volumes-5498\\\" or manually created by system administrator\"\nI1010 15:49:06.246828       1 pv_controller.go:879] volume \"pvc-49fa8fdf-60b8-42c2-859e-576cf5846ae8\" entered phase \"Bound\"\nI1010 15:49:06.246858       1 pv_controller.go:982] volume \"pvc-49fa8fdf-60b8-42c2-859e-576cf5846ae8\" bound to claim \"csi-mock-volumes-5498/pvc-mxl2l\"\nI1010 15:49:06.254657       1 pv_controller.go:823] claim \"csi-mock-volumes-5498/pvc-mxl2l\" entered phase \"Bound\"\nI1010 15:49:06.880369       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-49fa8fdf-60b8-42c2-859e-576cf5846ae8\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-5498^4\") from node \"ip-172-20-33-168.sa-east-1.compute.internal\" \nI1010 15:49:07.179943       1 event.go:291] \"Event occurred\" object=\"statefulset-1274/ss2\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss2-0 in StatefulSet ss2 successful\"\nI1010 15:49:07.251512       1 namespace_controller.go:185] Namespace has been deleted downward-api-3051\nI1010 15:49:07.448153       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume \"pvc-49fa8fdf-60b8-42c2-859e-576cf5846ae8\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-5498^4\") from node \"ip-172-20-33-168.sa-east-1.compute.internal\" \nI1010 15:49:07.448284       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-5498/pvc-volume-tester-c5gzn\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-49fa8fdf-60b8-42c2-859e-576cf5846ae8\\\" \"\nI1010 15:49:07.562024       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume \"pvc-93ad28a7-20f7-4c57-8c74-2a59d3aecbfc\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0f605956eedeb0544\") from node \"ip-172-20-61-156.sa-east-1.compute.internal\" \nI1010 15:49:07.563116       1 event.go:291] \"Event occurred\" object=\"fsgroupchangepolicy-8660/pod-3e251134-9975-49bc-8fb8-a8f7bdcd09f3\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-93ad28a7-20f7-4c57-8c74-2a59d3aecbfc\\\" \"\nI1010 15:49:08.083472       1 namespace_controller.go:185] Namespace has been deleted persistent-local-volumes-test-6638\nI1010 15:49:08.331641       1 namespace_controller.go:185] Namespace has been deleted container-probe-362\nI1010 15:49:08.742257       1 namespace_controller.go:185] Namespace has been deleted persistent-local-volumes-test-3276\nE1010 15:49:09.524017       1 tokens_controller.go:262] error synchronizing serviceaccount kubectl-1624/default: secrets \"default-token-d7lch\" is forbidden: unable to create new content in namespace kubectl-1624 because it is being 
terminated\nI1010 15:49:11.015962       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"csi-mock-volumes-4287/pvc-4tkmr\"\nI1010 15:49:11.026029       1 pv_controller.go:640] volume \"pvc-e33ac9e9-34f1-4311-925f-ba1720ea86c4\" is released and reclaim policy \"Delete\" will be executed\nI1010 15:49:11.029089       1 pv_controller.go:879] volume \"pvc-e33ac9e9-34f1-4311-925f-ba1720ea86c4\" entered phase \"Released\"\nI1010 15:49:11.030639       1 pv_controller.go:1340] isVolumeReleased[pvc-e33ac9e9-34f1-4311-925f-ba1720ea86c4]: volume is released\nI1010 15:49:11.044018       1 pv_controller_base.go:505] deletion of claim \"csi-mock-volumes-4287/pvc-4tkmr\" was already processed\nI1010 15:49:11.553591       1 pv_controller.go:879] volume \"pvc-6ad8395b-de66-4273-9fd9-79ca116cee74\" entered phase \"Bound\"\nI1010 15:49:11.553634       1 pv_controller.go:982] volume \"pvc-6ad8395b-de66-4273-9fd9-79ca116cee74\" bound to claim \"volume-2649/csi-hostpath4jjrh\"\nI1010 15:49:11.568476       1 pv_controller.go:823] claim \"volume-2649/csi-hostpath4jjrh\" entered phase \"Bound\"\nI1010 15:49:12.294192       1 namespace_controller.go:185] Namespace has been deleted provisioning-6804\nI1010 15:49:12.491404       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"webhook-6060/sample-webhook-deployment-78988fc6cd\" need=1 creating=1\nI1010 15:49:12.492033       1 event.go:291] \"Event occurred\" object=\"webhook-6060/sample-webhook-deployment\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set sample-webhook-deployment-78988fc6cd to 1\"\nI1010 15:49:12.500554       1 event.go:291] \"Event occurred\" object=\"webhook-6060/sample-webhook-deployment-78988fc6cd\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: sample-webhook-deployment-78988fc6cd-4lsbc\"\nI1010 15:49:12.502000       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"webhook-6060/sample-webhook-deployment\" err=\"Operation cannot be fulfilled on deployments.apps \\\"sample-webhook-deployment\\\": the object has been modified; please apply your changes to the latest version and try again\"\nE1010 15:49:12.605659       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI1010 15:49:12.809720       1 event.go:291] \"Event occurred\" object=\"ephemeral-7457-8822/csi-hostpathplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful\"\nI1010 15:49:12.990047       1 garbagecollector.go:471] \"Processing object\" object=\"webhook-9905/e2e-test-webhook-xwvrw\" objectUID=3889dddf-7f67-47b4-913e-6c3fc4e2148d kind=\"EndpointSlice\" virtual=false\nI1010 15:49:12.993856       1 garbagecollector.go:580] \"Deleting object\" object=\"webhook-9905/e2e-test-webhook-xwvrw\" objectUID=3889dddf-7f67-47b4-913e-6c3fc4e2148d kind=\"EndpointSlice\" propagationPolicy=Background\nI1010 15:49:13.146001       1 garbagecollector.go:471] \"Processing object\" object=\"webhook-9905/sample-webhook-deployment-78988fc6cd\" objectUID=d624c37a-f84b-44c7-9e76-c1655e3d2a71 kind=\"ReplicaSet\" virtual=false\nI1010 15:49:13.146275       1 deployment_controller.go:583] \"Deployment has been deleted\" 
deployment=\"webhook-9905/sample-webhook-deployment\"\nI1010 15:49:13.147859       1 garbagecollector.go:580] \"Deleting object\" object=\"webhook-9905/sample-webhook-deployment-78988fc6cd\" objectUID=d624c37a-f84b-44c7-9e76-c1655e3d2a71 kind=\"ReplicaSet\" propagationPolicy=Background\nI1010 15:49:13.152495       1 garbagecollector.go:471] \"Processing object\" object=\"webhook-9905/sample-webhook-deployment-78988fc6cd-bnvwj\" objectUID=6a6fac97-fb07-4906-b356-e215e982da7b kind=\"Pod\" virtual=false\nI1010 15:49:13.153943       1 garbagecollector.go:580] \"Deleting object\" object=\"webhook-9905/sample-webhook-deployment-78988fc6cd-bnvwj\" objectUID=6a6fac97-fb07-4906-b356-e215e982da7b kind=\"Pod\" propagationPolicy=Background\nI1010 15:49:13.161362       1 garbagecollector.go:471] \"Processing object\" object=\"webhook-9905/sample-webhook-deployment-78988fc6cd-bnvwj\" objectUID=6b172401-d79e-4fe0-bb4c-1b2e71bb2d04 kind=\"CiliumEndpoint\" virtual=false\nI1010 15:49:13.165276       1 garbagecollector.go:580] \"Deleting object\" object=\"webhook-9905/sample-webhook-deployment-78988fc6cd-bnvwj\" objectUID=6b172401-d79e-4fe0-bb4c-1b2e71bb2d04 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI1010 15:49:13.210073       1 event.go:291] \"Event occurred\" object=\"ephemeral-7457/inline-volume-tester-sr2sl-my-volume-0\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForPodScheduled\" message=\"waiting for pod inline-volume-tester-sr2sl to be scheduled\"\nI1010 15:49:13.498946       1 graph_builder.go:587] add [v1/Pod, namespace: ephemeral-883, name: inline-volume-tester-7xkjn, uid: 5efa5fca-79f3-4c65-b138-81e224af1bf9] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI1010 15:49:13.499038       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-883/inline-volume-tester-7xkjn-my-volume-0\" objectUID=748e818d-25c5-4913-92c6-e12947d7f38b kind=\"PersistentVolumeClaim\" virtual=false\nI1010 15:49:13.499504       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-883/inline-volume-tester-7xkjn\" objectUID=43cd7321-8022-4a6c-91ca-ca64298e2f5a kind=\"CiliumEndpoint\" virtual=false\nI1010 15:49:13.499758       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-883/inline-volume-tester-7xkjn\" objectUID=5efa5fca-79f3-4c65-b138-81e224af1bf9 kind=\"Pod\" virtual=false\nI1010 15:49:13.505582       1 garbagecollector.go:595] adding [v1/PersistentVolumeClaim, namespace: ephemeral-883, name: inline-volume-tester-7xkjn-my-volume-0, uid: 748e818d-25c5-4913-92c6-e12947d7f38b] to attemptToDelete, because its owner [v1/Pod, namespace: ephemeral-883, name: inline-volume-tester-7xkjn, uid: 5efa5fca-79f3-4c65-b138-81e224af1bf9] is deletingDependents\nI1010 15:49:13.505605       1 garbagecollector.go:595] adding [cilium.io/v2/CiliumEndpoint, namespace: ephemeral-883, name: inline-volume-tester-7xkjn, uid: 43cd7321-8022-4a6c-91ca-ca64298e2f5a] to attemptToDelete, because its owner [v1/Pod, namespace: ephemeral-883, name: inline-volume-tester-7xkjn, uid: 5efa5fca-79f3-4c65-b138-81e224af1bf9] is deletingDependents\nI1010 15:49:13.507775       1 garbagecollector.go:580] \"Deleting object\" object=\"ephemeral-883/inline-volume-tester-7xkjn-my-volume-0\" objectUID=748e818d-25c5-4913-92c6-e12947d7f38b kind=\"PersistentVolumeClaim\" propagationPolicy=Background\nI1010 15:49:13.508072       1 garbagecollector.go:580] \"Deleting object\" object=\"ephemeral-883/inline-volume-tester-7xkjn\" 
objectUID=43cd7321-8022-4a6c-91ca-ca64298e2f5a kind=\"CiliumEndpoint\" propagationPolicy=Background\nI1010 15:49:13.514232       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-883/inline-volume-tester-7xkjn\" objectUID=5efa5fca-79f3-4c65-b138-81e224af1bf9 kind=\"Pod\" virtual=false\nI1010 15:49:13.515403       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-883/inline-volume-tester-7xkjn\" objectUID=43cd7321-8022-4a6c-91ca-ca64298e2f5a kind=\"CiliumEndpoint\" virtual=false\nI1010 15:49:13.519309       1 garbagecollector.go:595] adding [v1/PersistentVolumeClaim, namespace: ephemeral-883, name: inline-volume-tester-7xkjn-my-volume-0, uid: 748e818d-25c5-4913-92c6-e12947d7f38b] to attemptToDelete, because its owner [v1/Pod, namespace: ephemeral-883, name: inline-volume-tester-7xkjn, uid: 5efa5fca-79f3-4c65-b138-81e224af1bf9] is deletingDependents\nI1010 15:49:13.539107       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"ephemeral-883/inline-volume-tester-7xkjn\" PVC=\"ephemeral-883/inline-volume-tester-7xkjn-my-volume-0\"\nI1010 15:49:13.539128       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"ephemeral-883/inline-volume-tester-7xkjn-my-volume-0\"\nI1010 15:49:13.539380       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-883/inline-volume-tester-7xkjn-my-volume-0\" objectUID=748e818d-25c5-4913-92c6-e12947d7f38b kind=\"PersistentVolumeClaim\" virtual=false\nI1010 15:49:13.660209       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-6ad8395b-de66-4273-9fd9-79ca116cee74\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-volume-2649^9cb7f3bc-29e1-11ec-9237-7eb4be312204\") from node \"ip-172-20-42-51.sa-east-1.compute.internal\" \nI1010 15:49:14.248304       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume \"pvc-6ad8395b-de66-4273-9fd9-79ca116cee74\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-volume-2649^9cb7f3bc-29e1-11ec-9237-7eb4be312204\") from node \"ip-172-20-42-51.sa-east-1.compute.internal\" \nI1010 15:49:14.248495       1 event.go:291] \"Event occurred\" object=\"volume-2649/hostpath-injector\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-6ad8395b-de66-4273-9fd9-79ca116cee74\\\" \"\nI1010 15:49:14.481063       1 namespace_controller.go:185] Namespace has been deleted services-3240\nI1010 15:49:14.509022       1 event.go:291] \"Event occurred\" object=\"fsgroupchangepolicy-3401/awsvqlkh\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForFirstConsumer\" message=\"waiting for first consumer to be created before binding\"\nI1010 15:49:14.599484       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"provisioning-2491/csi-hostpathsjqg4\"\nI1010 15:49:14.618274       1 pv_controller.go:640] volume \"pvc-e61120ee-ff18-4c37-bd60-8c5c29e67273\" is released and reclaim policy \"Delete\" will be executed\nI1010 15:49:14.621612       1 pv_controller.go:879] volume \"pvc-e61120ee-ff18-4c37-bd60-8c5c29e67273\" entered phase \"Released\"\nI1010 15:49:14.623141       1 pv_controller.go:1340] isVolumeReleased[pvc-e61120ee-ff18-4c37-bd60-8c5c29e67273]: volume is released\nI1010 15:49:14.657233       1 pv_controller_base.go:505] deletion of claim \"provisioning-2491/csi-hostpathsjqg4\" was already processed\nI1010 15:49:14.679647       1 namespace_controller.go:185] Namespace has been deleted kubectl-1624\nI1010 
15:49:14.692271       1 event.go:291] \"Event occurred\" object=\"ephemeral-7457/inline-volume-tester-sr2sl-my-volume-0\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-ephemeral-7457\\\" or manually created by system administrator\"\nI1010 15:49:14.692298       1 event.go:291] \"Event occurred\" object=\"ephemeral-7457/inline-volume-tester-sr2sl-my-volume-0\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-ephemeral-7457\\\" or manually created by system administrator\"\nI1010 15:49:14.772194       1 namespace_controller.go:185] Namespace has been deleted provisioning-7163\nI1010 15:49:14.804825       1 event.go:291] \"Event occurred\" object=\"fsgroupchangepolicy-3401/awsvqlkh\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"ebs.csi.aws.com\\\" or manually created by system administrator\"\nI1010 15:49:14.805103       1 event.go:291] \"Event occurred\" object=\"fsgroupchangepolicy-3401/awsvqlkh\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"ebs.csi.aws.com\\\" or manually created by system administrator\"\nI1010 15:49:15.032946       1 namespace_controller.go:185] Namespace has been deleted dns-9864\nI1010 15:49:15.144560       1 namespace_controller.go:185] Namespace has been deleted projected-5714\nI1010 15:49:15.145377       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"provisioning-7344/csi-hostpathk54z9\"\nI1010 15:49:15.151924       1 pv_controller.go:640] volume \"pvc-4239eb73-075d-4989-993e-d59d72c02aef\" is released and reclaim policy \"Delete\" will be executed\nI1010 15:49:15.154740       1 pv_controller.go:879] volume \"pvc-4239eb73-075d-4989-993e-d59d72c02aef\" entered phase \"Released\"\nI1010 15:49:15.159076       1 pv_controller.go:1340] isVolumeReleased[pvc-4239eb73-075d-4989-993e-d59d72c02aef]: volume is released\nI1010 15:49:15.166785       1 pv_controller_base.go:505] deletion of claim \"provisioning-7344/csi-hostpathk54z9\" was already processed\nE1010 15:49:15.684177       1 tokens_controller.go:262] error synchronizing serviceaccount container-runtime-8182/default: secrets \"default-token-7jqgd\" is forbidden: unable to create new content in namespace container-runtime-8182 because it is being terminated\nI1010 15:49:15.745359       1 event.go:291] \"Event occurred\" object=\"fsgroupchangepolicy-3401/awsvqlkh\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"ebs.csi.aws.com\\\" or manually created by system administrator\"\nI1010 15:49:15.745389       1 event.go:291] \"Event occurred\" object=\"ephemeral-7457/inline-volume-tester-sr2sl-my-volume-0\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-ephemeral-7457\\\" or manually created by system administrator\"\nW1010 15:49:16.397224       1 utils.go:265] Service services-3513/service-headless 
using reserved endpoint slices label, skipping label service.kubernetes.io/headless: \nI1010 15:49:16.541792       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"services-3513/service-headless\" need=3 creating=3\nW1010 15:49:16.545479       1 utils.go:265] Service services-3513/service-headless using reserved endpoint slices label, skipping label service.kubernetes.io/headless: \nI1010 15:49:16.550449       1 event.go:291] \"Event occurred\" object=\"services-3513/service-headless\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: service-headless-mfssm\"\nW1010 15:49:16.561369       1 utils.go:265] Service services-3513/service-headless using reserved endpoint slices label, skipping label service.kubernetes.io/headless: \nI1010 15:49:16.562379       1 event.go:291] \"Event occurred\" object=\"services-3513/service-headless\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: service-headless-7zpnv\"\nI1010 15:49:16.565362       1 event.go:291] \"Event occurred\" object=\"services-3513/service-headless\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: service-headless-8l8bt\"\nW1010 15:49:16.567780       1 utils.go:265] Service services-3513/service-headless using reserved endpoint slices label, skipping label service.kubernetes.io/headless: \nI1010 15:49:17.097895       1 namespace_controller.go:185] Namespace has been deleted provisioning-7782\nW1010 15:49:17.400938       1 utils.go:265] Service services-3513/service-headless using reserved endpoint slices label, skipping label service.kubernetes.io/headless: \nI1010 15:49:17.521287       1 garbagecollector.go:471] \"Processing object\" object=\"kubectl-1413/httpd\" objectUID=e5dea6cb-668e-4f6a-a75a-7cc932daadc2 kind=\"CiliumEndpoint\" virtual=false\nI1010 15:49:17.535565       1 garbagecollector.go:580] \"Deleting object\" object=\"kubectl-1413/httpd\" objectUID=e5dea6cb-668e-4f6a-a75a-7cc932daadc2 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI1010 15:49:18.263596       1 pv_controller.go:879] volume \"pvc-25f038ca-8dba-4753-896d-a810200b92b0\" entered phase \"Bound\"\nI1010 15:49:18.263669       1 pv_controller.go:982] volume \"pvc-25f038ca-8dba-4753-896d-a810200b92b0\" bound to claim \"fsgroupchangepolicy-3401/awsvqlkh\"\nI1010 15:49:18.272331       1 pv_controller.go:823] claim \"fsgroupchangepolicy-3401/awsvqlkh\" entered phase \"Bound\"\nI1010 15:49:18.442376       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-27cgp-df7g5\" objectUID=50dd2db0-4809-4c9c-8fd8-a2a2a57b0dfb kind=\"EndpointSlice\" virtual=false\nI1010 15:49:18.449629       1 garbagecollector.go:580] \"Deleting object\" object=\"svc-latency-3888/latency-svc-27cgp-df7g5\" objectUID=50dd2db0-4809-4c9c-8fd8-a2a2a57b0dfb kind=\"EndpointSlice\" propagationPolicy=Background\nI1010 15:49:18.461147       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-27d5b-zrr2g\" objectUID=04089818-77d2-4b7b-b185-2dca7f9e72a2 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:18.467658       1 garbagecollector.go:580] \"Deleting object\" object=\"svc-latency-3888/latency-svc-27d5b-zrr2g\" objectUID=04089818-77d2-4b7b-b185-2dca7f9e72a2 kind=\"EndpointSlice\" propagationPolicy=Background\nI1010 15:49:18.472622       1 garbagecollector.go:471] \"Processing object\" 
object=\"svc-latency-3888/latency-svc-27f8z-c4cf9\" objectUID=37b0e98f-3a1b-4bb2-ae42-ee0183e31191 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:18.479482       1 garbagecollector.go:580] \"Deleting object\" object=\"svc-latency-3888/latency-svc-27f8z-c4cf9\" objectUID=37b0e98f-3a1b-4bb2-ae42-ee0183e31191 kind=\"EndpointSlice\" propagationPolicy=Background\nI1010 15:49:18.495277       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-27kxx-jhstl\" objectUID=1157a80b-ba9e-4b21-9a27-803b3791bc40 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:18.497989       1 garbagecollector.go:580] \"Deleting object\" object=\"svc-latency-3888/latency-svc-27kxx-jhstl\" objectUID=1157a80b-ba9e-4b21-9a27-803b3791bc40 kind=\"EndpointSlice\" propagationPolicy=Background\nI1010 15:49:18.503702       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-2f2ql-8v8ml\" objectUID=79a5abdd-9893-41ee-b07d-d456e24a8b04 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:18.507534       1 garbagecollector.go:580] \"Deleting object\" object=\"svc-latency-3888/latency-svc-2f2ql-8v8ml\" objectUID=79a5abdd-9893-41ee-b07d-d456e24a8b04 kind=\"EndpointSlice\" propagationPolicy=Background\nI1010 15:49:18.508284       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-2jdk8-dsm7c\" objectUID=6ddbebbe-aa6e-4884-aaa5-d642d31b51db kind=\"EndpointSlice\" virtual=false\nI1010 15:49:18.510361       1 garbagecollector.go:580] \"Deleting object\" object=\"svc-latency-3888/latency-svc-2jdk8-dsm7c\" objectUID=6ddbebbe-aa6e-4884-aaa5-d642d31b51db kind=\"EndpointSlice\" propagationPolicy=Background\nI1010 15:49:18.517826       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-2jgrj-5chtl\" objectUID=4eb41ecb-ea83-4ba6-9add-fb85658693d9 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:18.520315       1 garbagecollector.go:580] \"Deleting object\" object=\"svc-latency-3888/latency-svc-2jgrj-5chtl\" objectUID=4eb41ecb-ea83-4ba6-9add-fb85658693d9 kind=\"EndpointSlice\" propagationPolicy=Background\nI1010 15:49:18.532127       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-479s8-qvzwv\" objectUID=99d823c6-953a-497b-b4c1-f0833eceb4ba kind=\"EndpointSlice\" virtual=false\nI1010 15:49:18.537678       1 garbagecollector.go:580] \"Deleting object\" object=\"svc-latency-3888/latency-svc-479s8-qvzwv\" objectUID=99d823c6-953a-497b-b4c1-f0833eceb4ba kind=\"EndpointSlice\" propagationPolicy=Background\nI1010 15:49:18.543892       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-47jz5-mbdqv\" objectUID=1b2df65c-22a6-4f1c-b16f-822ee2728408 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:18.550179       1 garbagecollector.go:580] \"Deleting object\" object=\"svc-latency-3888/latency-svc-47jz5-mbdqv\" objectUID=1b2df65c-22a6-4f1c-b16f-822ee2728408 kind=\"EndpointSlice\" propagationPolicy=Background\nI1010 15:49:18.552198       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-485k4-8mlrq\" objectUID=0b677ad8-01f4-4dc7-80e8-99986ce6420a kind=\"EndpointSlice\" virtual=false\nI1010 15:49:18.555497       1 garbagecollector.go:580] \"Deleting object\" object=\"svc-latency-3888/latency-svc-485k4-8mlrq\" objectUID=0b677ad8-01f4-4dc7-80e8-99986ce6420a kind=\"EndpointSlice\" propagationPolicy=Background\nI1010 15:49:18.562121       1 garbagecollector.go:471] \"Processing object\" 
object=\"svc-latency-3888/latency-svc-4d9sc-hph8p\" objectUID=6506f0b8-df06-460c-9bba-513b51d79828 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:18.565083       1 garbagecollector.go:580] \"Deleting object\" object=\"svc-latency-3888/latency-svc-4d9sc-hph8p\" objectUID=6506f0b8-df06-460c-9bba-513b51d79828 kind=\"EndpointSlice\" propagationPolicy=Background\nI1010 15:49:18.569283       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-4hkjt-5rsbr\" objectUID=201dfe97-78a8-41ba-8656-6980c3e7e4f8 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:18.572867       1 garbagecollector.go:580] \"Deleting object\" object=\"svc-latency-3888/latency-svc-4hkjt-5rsbr\" objectUID=201dfe97-78a8-41ba-8656-6980c3e7e4f8 kind=\"EndpointSlice\" propagationPolicy=Background\nI1010 15:49:18.577362       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-4kgfp-787qr\" objectUID=4e9f2455-6b2f-43eb-b219-2d270b0c62f1 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:18.579502       1 garbagecollector.go:580] \"Deleting object\" object=\"svc-latency-3888/latency-svc-4kgfp-787qr\" objectUID=4e9f2455-6b2f-43eb-b219-2d270b0c62f1 kind=\"EndpointSlice\" propagationPolicy=Background\nI1010 15:49:18.588355       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-4s5dv-dsgjp\" objectUID=64bfc97a-359f-4325-9ae5-4c6676df7872 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:18.603877       1 garbagecollector.go:580] \"Deleting object\" object=\"svc-latency-3888/latency-svc-4s5dv-dsgjp\" objectUID=64bfc97a-359f-4325-9ae5-4c6676df7872 kind=\"EndpointSlice\" propagationPolicy=Background\nI1010 15:49:18.624633       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-4vtrf-hgx6b\" objectUID=267fc738-f6a7-43a7-b8d5-e34eb9298d09 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:18.637096       1 garbagecollector.go:580] \"Deleting object\" object=\"svc-latency-3888/latency-svc-4vtrf-hgx6b\" objectUID=267fc738-f6a7-43a7-b8d5-e34eb9298d09 kind=\"EndpointSlice\" propagationPolicy=Background\nI1010 15:49:18.645186       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-4xrlk-ctjs7\" objectUID=6a5d7fec-bdac-44c0-92f2-c8200888688b kind=\"EndpointSlice\" virtual=false\nI1010 15:49:18.650177       1 garbagecollector.go:580] \"Deleting object\" object=\"svc-latency-3888/latency-svc-4xrlk-ctjs7\" objectUID=6a5d7fec-bdac-44c0-92f2-c8200888688b kind=\"EndpointSlice\" propagationPolicy=Background\nI1010 15:49:18.652218       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"csi-mock-volumes-7017/pvc-jfv9x\"\nI1010 15:49:18.659045       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-52l2q-g7kh9\" objectUID=87ab8200-ac7a-496c-b407-a41e03226fa1 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:18.662628       1 garbagecollector.go:580] \"Deleting object\" object=\"svc-latency-3888/latency-svc-52l2q-g7kh9\" objectUID=87ab8200-ac7a-496c-b407-a41e03226fa1 kind=\"EndpointSlice\" propagationPolicy=Background\nI1010 15:49:18.668480       1 pv_controller.go:640] volume \"pvc-ed6216fc-74b3-4b3f-9951-82d47a370308\" is released and reclaim policy \"Delete\" will be executed\nI1010 15:49:18.673647       1 pv_controller.go:879] volume \"pvc-ed6216fc-74b3-4b3f-9951-82d47a370308\" entered phase \"Released\"\nI1010 15:49:18.674062       1 garbagecollector.go:471] \"Processing object\" 
object=\"svc-latency-3888/latency-svc-5527f-5qzs7\" objectUID=1733efbc-3c79-42e1-868e-63bcbce23c89 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:18.679317       1 pv_controller.go:1340] isVolumeReleased[pvc-ed6216fc-74b3-4b3f-9951-82d47a370308]: volume is released\nI1010 15:49:18.683886       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-555ns-xtvlz\" objectUID=ee6e7858-26e6-46b5-99fe-8458b601bcce kind=\"EndpointSlice\" virtual=false\nI1010 15:49:18.689294       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-592zw-sjpxw\" objectUID=99e552b4-2c92-42ce-b102-aaceb9c0a4ab kind=\"EndpointSlice\" virtual=false\nI1010 15:49:18.697400       1 garbagecollector.go:580] \"Deleting object\" object=\"svc-latency-3888/latency-svc-5527f-5qzs7\" objectUID=1733efbc-3c79-42e1-868e-63bcbce23c89 kind=\"EndpointSlice\" propagationPolicy=Background\nI1010 15:49:18.699386       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-59h4q-24r5n\" objectUID=c388ab3f-0843-4f73-b9eb-affcf724e637 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:18.705686       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-5b79x-hlpdk\" objectUID=a371525c-b6f0-4538-902b-3f7f2ae7020d kind=\"EndpointSlice\" virtual=false\nI1010 15:49:18.713055       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-5gk6f-mbs4w\" objectUID=79d7f64a-fb85-43a7-9a8b-85719aac43dd kind=\"EndpointSlice\" virtual=false\nI1010 15:49:18.718686       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-5ljgv-svr28\" objectUID=bc9edbf0-d369-49c4-a145-76ff618186d2 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:18.729354       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-5n8jl-2wsr7\" objectUID=13d8fa52-27df-4adb-9b58-383836a89e7c kind=\"EndpointSlice\" virtual=false\nI1010 15:49:18.745414       1 garbagecollector.go:580] \"Deleting object\" object=\"svc-latency-3888/latency-svc-555ns-xtvlz\" objectUID=ee6e7858-26e6-46b5-99fe-8458b601bcce kind=\"EndpointSlice\" propagationPolicy=Background\nI1010 15:49:18.747350       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-5q78d-hb54s\" objectUID=f3d433bd-bf42-40e0-a8d9-1898e9f29d70 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:18.755411       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-5rm4r-x9xbz\" objectUID=fe56bd7d-a74f-4371-ac60-fa484ec1f5ba kind=\"EndpointSlice\" virtual=false\nI1010 15:49:18.763760       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-5tkzw-v6n5w\" objectUID=fd3af34c-c096-4f57-b77e-840b04d5737a kind=\"EndpointSlice\" virtual=false\nI1010 15:49:18.770485       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-5vd7k-d5lcq\" objectUID=6c96ec3b-fce4-4fcb-b21d-897218fb3d96 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:18.779166       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-5zvdf-nbndl\" objectUID=d4487865-f675-43f2-ae15-22cd33d6ed38 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:18.795901       1 garbagecollector.go:471] \"Processing object\" object=\"dns-2418/dns-test-21e301c0-ba42-4906-98c5-9d9f9cc36c82\" objectUID=7ffddba5-98d5-4c1a-a9e9-e853ff4c77f1 kind=\"CiliumEndpoint\" virtual=false\nI1010 15:49:18.798301       1 
garbagecollector.go:580] \"Deleting object\" object=\"svc-latency-3888/latency-svc-592zw-sjpxw\" objectUID=99e552b4-2c92-42ce-b102-aaceb9c0a4ab kind=\"EndpointSlice\" propagationPolicy=Background\nI1010 15:49:18.799328       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-62dc4-qrk8q\" objectUID=6bf8e668-0fab-497f-b88a-81d579d3399e kind=\"EndpointSlice\" virtual=false\nI1010 15:49:18.816726       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-62vll-hg5jr\" objectUID=fed0167e-e7d1-4714-b494-f5201ee1bea4 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:18.833218       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-68qmz-xvtbv\" objectUID=a7fe0348-2ca0-446a-b463-613a4fd12497 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:18.849563       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-6cc28-9rxjr\" objectUID=2cbd4f85-0bae-477f-af0a-789120941147 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:18.858871       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-6q7n9-5v6sd\" objectUID=7c07e724-2dbf-4e60-ae50-38b52abfb692 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:18.881118       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-6zjkd-vb88h\" objectUID=f9a66515-a2b1-4ba8-adee-454dbbf60b13 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:18.889128       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-75qls-mz5sb\" objectUID=42773eb7-2adc-4d0c-9941-48115bc6a004 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:18.897965       1 garbagecollector.go:580] \"Deleting object\" object=\"svc-latency-3888/latency-svc-59h4q-24r5n\" objectUID=c388ab3f-0843-4f73-b9eb-affcf724e637 kind=\"EndpointSlice\" propagationPolicy=Background\nI1010 15:49:18.909977       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-25f038ca-8dba-4753-896d-a810200b92b0\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-07ac28e0c44dc26ae\") from node \"ip-172-20-33-168.sa-east-1.compute.internal\" \nI1010 15:49:18.958094       1 garbagecollector.go:580] \"Deleting object\" object=\"svc-latency-3888/latency-svc-5b79x-hlpdk\" objectUID=a371525c-b6f0-4538-902b-3f7f2ae7020d kind=\"EndpointSlice\" propagationPolicy=Background\nI1010 15:49:18.996209       1 garbagecollector.go:580] \"Deleting object\" object=\"svc-latency-3888/latency-svc-5gk6f-mbs4w\" objectUID=79d7f64a-fb85-43a7-9a8b-85719aac43dd kind=\"EndpointSlice\" propagationPolicy=Background\nI1010 15:49:19.049442       1 garbagecollector.go:580] \"Deleting object\" object=\"svc-latency-3888/latency-svc-5ljgv-svr28\" objectUID=bc9edbf0-d369-49c4-a145-76ff618186d2 kind=\"EndpointSlice\" propagationPolicy=Background\nI1010 15:49:19.097645       1 garbagecollector.go:580] \"Deleting object\" object=\"svc-latency-3888/latency-svc-5n8jl-2wsr7\" objectUID=13d8fa52-27df-4adb-9b58-383836a89e7c kind=\"EndpointSlice\" propagationPolicy=Background\nI1010 15:49:19.157763       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-787bp-tghrr\" objectUID=2e2b97a7-3d8c-4165-bc5a-e7c6b999fde4 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:19.201828       1 garbagecollector.go:580] \"Deleting object\" object=\"svc-latency-3888/latency-svc-5q78d-hb54s\" objectUID=f3d433bd-bf42-40e0-a8d9-1898e9f29d70 kind=\"EndpointSlice\" propagationPolicy=Background\nI1010 
15:49:19.259752       1 garbagecollector.go:580] \"Deleting object\" object=\"svc-latency-3888/latency-svc-5rm4r-x9xbz\" objectUID=fe56bd7d-a74f-4371-ac60-fa484ec1f5ba kind=\"EndpointSlice\" propagationPolicy=Background\nI1010 15:49:19.306614       1 garbagecollector.go:580] \"Deleting object\" object=\"svc-latency-3888/latency-svc-5tkzw-v6n5w\" objectUID=fd3af34c-c096-4f57-b77e-840b04d5737a kind=\"EndpointSlice\" propagationPolicy=Background\nI1010 15:49:19.360422       1 garbagecollector.go:580] \"Deleting object\" object=\"svc-latency-3888/latency-svc-5vd7k-d5lcq\" objectUID=6c96ec3b-fce4-4fcb-b21d-897218fb3d96 kind=\"EndpointSlice\" propagationPolicy=Background\nI1010 15:49:19.407749       1 garbagecollector.go:580] \"Deleting object\" object=\"svc-latency-3888/latency-svc-5zvdf-nbndl\" objectUID=d4487865-f675-43f2-ae15-22cd33d6ed38 kind=\"EndpointSlice\" propagationPolicy=Background\nI1010 15:49:19.464702       1 garbagecollector.go:580] \"Deleting object\" object=\"dns-2418/dns-test-21e301c0-ba42-4906-98c5-9d9f9cc36c82\" objectUID=7ffddba5-98d5-4c1a-a9e9-e853ff4c77f1 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI1010 15:49:19.511745       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-7g77v-v2clv\" objectUID=ccee9205-cfc7-4c92-9dcf-e04deb4fa80e kind=\"EndpointSlice\" virtual=false\nI1010 15:49:19.515788       1 pv_controller.go:879] volume \"pvc-02574bae-0800-45bd-85f4-419e3c36a2e6\" entered phase \"Bound\"\nI1010 15:49:19.515820       1 pv_controller.go:982] volume \"pvc-02574bae-0800-45bd-85f4-419e3c36a2e6\" bound to claim \"ephemeral-7457/inline-volume-tester-sr2sl-my-volume-0\"\nI1010 15:49:19.535673       1 pv_controller.go:823] claim \"ephemeral-7457/inline-volume-tester-sr2sl-my-volume-0\" entered phase \"Bound\"\nI1010 15:49:19.546148       1 garbagecollector.go:580] \"Deleting object\" object=\"svc-latency-3888/latency-svc-62dc4-qrk8q\" objectUID=6bf8e668-0fab-497f-b88a-81d579d3399e kind=\"EndpointSlice\" propagationPolicy=Background\nI1010 15:49:19.595360       1 garbagecollector.go:580] \"Deleting object\" object=\"svc-latency-3888/latency-svc-62vll-hg5jr\" objectUID=fed0167e-e7d1-4714-b494-f5201ee1bea4 kind=\"EndpointSlice\" propagationPolicy=Background\nI1010 15:49:19.645911       1 garbagecollector.go:580] \"Deleting object\" object=\"svc-latency-3888/latency-svc-68qmz-xvtbv\" objectUID=a7fe0348-2ca0-446a-b463-613a4fd12497 kind=\"EndpointSlice\" propagationPolicy=Background\nI1010 15:49:19.703684       1 garbagecollector.go:580] \"Deleting object\" object=\"svc-latency-3888/latency-svc-6cc28-9rxjr\" objectUID=2cbd4f85-0bae-477f-af0a-789120941147 kind=\"EndpointSlice\" propagationPolicy=Background\nW1010 15:49:19.715524       1 utils.go:265] Service services-3513/service-headless using reserved endpoint slices label, skipping label service.kubernetes.io/headless: \nI1010 15:49:19.720217       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-02574bae-0800-45bd-85f4-419e3c36a2e6\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-ephemeral-7457^a1761cba-29e1-11ec-820b-c6410728b2b2\") from node \"ip-172-20-33-168.sa-east-1.compute.internal\" \nI1010 15:49:19.746551       1 garbagecollector.go:580] \"Deleting object\" object=\"svc-latency-3888/latency-svc-6q7n9-5v6sd\" objectUID=7c07e724-2dbf-4e60-ae50-38b52abfb692 kind=\"EndpointSlice\" propagationPolicy=Background\nI1010 15:49:19.769968       1 stateful_set_control.go:521] StatefulSet statefulset-1274/ss2 terminating Pod ss2-2 for scale down\nI1010 
15:49:19.780362       1 event.go:291] \"Event occurred\" object=\"statefulset-1274/ss2\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"delete Pod ss2-2 in StatefulSet ss2 successful\"\nI1010 15:49:19.805150       1 garbagecollector.go:580] \"Deleting object\" object=\"svc-latency-3888/latency-svc-6zjkd-vb88h\" objectUID=f9a66515-a2b1-4ba8-adee-454dbbf60b13 kind=\"EndpointSlice\" propagationPolicy=Background\nI1010 15:49:19.850684       1 garbagecollector.go:580] \"Deleting object\" object=\"svc-latency-3888/latency-svc-75qls-mz5sb\" objectUID=42773eb7-2adc-4d0c-9941-48115bc6a004 kind=\"EndpointSlice\" propagationPolicy=Background\nI1010 15:49:19.901513       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-7mbgz-2567v\" objectUID=76a3f2a7-a7a6-409d-a2ec-5d2f615d680a kind=\"EndpointSlice\" virtual=false\nI1010 15:49:19.948012       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-7pxjt-nkmwk\" objectUID=67cfedba-6608-4e0a-8a56-8422e3ead582 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:19.999847       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-7q6q6-9lh7t\" objectUID=7b52a86b-d32c-42b4-9bf0-1d835be9a8ef kind=\"EndpointSlice\" virtual=false\nI1010 15:49:20.057906       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-7tz7h-rvns9\" objectUID=00d7afb2-7961-4777-8bf4-77a3905ef436 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:20.104464       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-7xb9s-26vsh\" objectUID=ea29249c-5818-4e84-812e-01c71dcb698b kind=\"EndpointSlice\" virtual=false\nI1010 15:49:20.154249       1 garbagecollector.go:580] \"Deleting object\" object=\"svc-latency-3888/latency-svc-787bp-tghrr\" objectUID=2e2b97a7-3d8c-4165-bc5a-e7c6b999fde4 kind=\"EndpointSlice\" propagationPolicy=Background\nE1010 15:49:20.185132       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-2491/default: secrets \"default-token-xtbcc\" is forbidden: unable to create new content in namespace provisioning-2491 because it is being terminated\nI1010 15:49:20.197514       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-829l2-h6wr8\" objectUID=398171f4-24b1-4043-b4ba-424417dccb55 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:20.233135       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"replication-controller-7235/rc-test\" need=1 creating=1\nI1010 15:49:20.238718       1 event.go:291] \"Event occurred\" object=\"replication-controller-7235/rc-test\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: rc-test-bq78x\"\nI1010 15:49:20.252127       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-86q5z-xqhgf\" objectUID=c751c170-e124-47ed-af3e-97c411ecfb78 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:20.263780       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume \"pvc-02574bae-0800-45bd-85f4-419e3c36a2e6\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-ephemeral-7457^a1761cba-29e1-11ec-820b-c6410728b2b2\") from node \"ip-172-20-33-168.sa-east-1.compute.internal\" \nI1010 15:49:20.264427       1 event.go:291] \"Event occurred\" object=\"ephemeral-7457/inline-volume-tester-sr2sl\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" 
reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-02574bae-0800-45bd-85f4-419e3c36a2e6\\\" \"\nI1010 15:49:20.302401       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-87kjq-d224h\" objectUID=6bb8bd17-383b-46ce-a914-73831b2af90c kind=\"EndpointSlice\" virtual=false\nI1010 15:49:20.353629       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-8lwl4-c8q64\" objectUID=ab3ddcbd-2eee-4103-9dc4-d3a82f7561ba kind=\"EndpointSlice\" virtual=false\nI1010 15:49:20.402128       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-8q8sl-d76pw\" objectUID=ec0489d7-8d84-4b9a-a377-ee4c02f74d05 kind=\"EndpointSlice\" virtual=false\nE1010 15:49:20.446505       1 garbagecollector.go:350] error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:\"cilium.io/v2\", Kind:\"CiliumEndpoint\", Name:\"dns-test-21e301c0-ba42-4906-98c5-9d9f9cc36c82\", UID:\"7ffddba5-98d5-4c1a-a9e9-e853ff4c77f1\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:\"dns-2418\"}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:1, readerWait:0}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:\"v1\", Kind:\"Pod\", Name:\"dns-test-21e301c0-ba42-4906-98c5-9d9f9cc36c82\", UID:\"89e7eaad-df92-4e9f-bbb4-732074dc13b3\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(0xc0033cfc92)}}}: ciliumendpoints.cilium.io \"dns-test-21e301c0-ba42-4906-98c5-9d9f9cc36c82\" not found\nI1010 15:49:20.446556       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-8w4vd-sgp68\" objectUID=2e4124b2-83fa-4156-9878-bbbeba440900 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:20.497214       1 garbagecollector.go:580] \"Deleting object\" object=\"svc-latency-3888/latency-svc-7g77v-v2clv\" objectUID=ccee9205-cfc7-4c92-9dcf-e04deb4fa80e kind=\"EndpointSlice\" propagationPolicy=Background\nE1010 15:49:20.540210       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI1010 15:49:20.548837       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-92gdz-fl297\" objectUID=9d82a3f4-e69f-4c86-9f5d-b446cb6f2c1e kind=\"EndpointSlice\" virtual=false\nI1010 15:49:20.600559       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-949d9-7n4kj\" objectUID=8b5fc2e4-51d4-4de2-a0dc-71374611bb1b kind=\"EndpointSlice\" virtual=false\nE1010 15:49:20.632821       1 tokens_controller.go:262] error synchronizing serviceaccount proxy-1156/default: secrets \"default-token-ltff6\" is forbidden: unable to create new content in namespace proxy-1156 because it is being terminated\nI1010 15:49:20.653318       1 garbagecollector.go:471] \"Processing 
object\" object=\"svc-latency-3888/latency-svc-998p9-7zzfd\" objectUID=c9ec387f-6643-4ba4-944d-f3ba4b574667 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:20.685821       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"kubectl-1423/agnhost-primary\" need=1 creating=1\nI1010 15:49:20.693644       1 event.go:291] \"Event occurred\" object=\"kubectl-1423/agnhost-primary\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: agnhost-primary-5k5fh\"\nI1010 15:49:20.717146       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-9psll-trwpw\" objectUID=9c6674d0-6ed3-4662-be1a-d181ea9afd5a kind=\"EndpointSlice\" virtual=false\nI1010 15:49:20.754998       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-9rhwd-kw8jb\" objectUID=0f038fee-c07a-4f54-8b4e-400e000d32c7 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:20.772909       1 namespace_controller.go:185] Namespace has been deleted container-runtime-8182\nI1010 15:49:20.801469       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-9rk9w-kst78\" objectUID=4b99fc03-0876-49c1-af91-d02bd2328595 kind=\"EndpointSlice\" virtual=false\nW1010 15:49:20.846049       1 endpointslice_controller.go:306] Error syncing endpoint slices for service \"statefulset-1274/test\", retrying. Error: EndpointSlice informer cache is out of date\nI1010 15:49:20.850851       1 stateful_set_control.go:521] StatefulSet statefulset-1274/ss2 terminating Pod ss2-1 for scale down\nI1010 15:49:20.883069       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-9t229-vdbnz\" objectUID=c6bded5d-f188-42c7-8b5f-1b45d81c46f7 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:20.885551       1 event.go:291] \"Event occurred\" object=\"statefulset-1274/ss2\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"delete Pod ss2-1 in StatefulSet ss2 successful\"\nI1010 15:49:20.925291       1 garbagecollector.go:580] \"Deleting object\" object=\"svc-latency-3888/latency-svc-7mbgz-2567v\" objectUID=76a3f2a7-a7a6-409d-a2ec-5d2f615d680a kind=\"EndpointSlice\" propagationPolicy=Background\nI1010 15:49:20.946613       1 garbagecollector.go:580] \"Deleting object\" object=\"svc-latency-3888/latency-svc-7pxjt-nkmwk\" objectUID=67cfedba-6608-4e0a-8a56-8422e3ead582 kind=\"EndpointSlice\" propagationPolicy=Background\nI1010 15:49:20.998757       1 garbagecollector.go:580] \"Deleting object\" object=\"svc-latency-3888/latency-svc-7q6q6-9lh7t\" objectUID=7b52a86b-d32c-42b4-9bf0-1d835be9a8ef kind=\"EndpointSlice\" propagationPolicy=Background\nI1010 15:49:21.046269       1 garbagecollector.go:580] \"Deleting object\" object=\"svc-latency-3888/latency-svc-7tz7h-rvns9\" objectUID=00d7afb2-7961-4777-8bf4-77a3905ef436 kind=\"EndpointSlice\" propagationPolicy=Background\nI1010 15:49:21.096813       1 garbagecollector.go:580] \"Deleting object\" object=\"svc-latency-3888/latency-svc-7xb9s-26vsh\" objectUID=ea29249c-5818-4e84-812e-01c71dcb698b kind=\"EndpointSlice\" propagationPolicy=Background\nE1010 15:49:21.136749       1 tokens_controller.go:262] error synchronizing serviceaccount request-timeout-9678/default: secrets \"default-token-r5mbs\" is forbidden: unable to create new content in namespace request-timeout-9678 because it is being terminated\nI1010 15:49:21.152519       1 garbagecollector.go:471] \"Processing object\" 
object=\"svc-latency-3888/latency-svc-9tzrp-v64sn\" objectUID=6a44dddd-0438-4efc-a711-416198bdfbf1 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:21.194741       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"svc-latency-3888/svc-latency-rc\" need=1 creating=1\nI1010 15:49:21.200795       1 garbagecollector.go:580] \"Deleting object\" object=\"svc-latency-3888/latency-svc-829l2-h6wr8\" objectUID=398171f4-24b1-4043-b4ba-424417dccb55 kind=\"EndpointSlice\" propagationPolicy=Background\nI1010 15:49:21.247031       1 garbagecollector.go:580] \"Deleting object\" object=\"svc-latency-3888/latency-svc-86q5z-xqhgf\" objectUID=c751c170-e124-47ed-af3e-97c411ecfb78 kind=\"EndpointSlice\" propagationPolicy=Background\nI1010 15:49:21.280630       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume \"pvc-25f038ca-8dba-4753-896d-a810200b92b0\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-07ac28e0c44dc26ae\") from node \"ip-172-20-33-168.sa-east-1.compute.internal\" \nI1010 15:49:21.280970       1 event.go:291] \"Event occurred\" object=\"fsgroupchangepolicy-3401/pod-03f02e51-ab72-4038-82c5-83e2bf088627\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-25f038ca-8dba-4753-896d-a810200b92b0\\\" \"\nI1010 15:49:21.295446       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-9w7zb-l7bcv\" objectUID=7bee9699-81dc-4e94-b9b6-ec2e608824dd kind=\"EndpointSlice\" virtual=false\nI1010 15:49:21.351641       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-9xchx-tx99m\" objectUID=53643933-ebd2-4572-b20c-441dfcee22c5 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:21.395804       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-b2xd2-chppp\" objectUID=5e7eac0e-6553-45f1-9ba1-ed02f0865b19 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:21.449093       1 request.go:665] Waited for 1.0023854s due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1/apis/discovery.k8s.io/v1/namespaces/svc-latency-3888/endpointslices/latency-svc-8w4vd-sgp68\nI1010 15:49:21.451794       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-b6pqs-fbvs8\" objectUID=174955ce-cb85-4553-b97e-dee149d70b40 kind=\"EndpointSlice\" virtual=false\nE1010 15:49:21.502917       1 garbagecollector.go:350] error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:\"discovery.k8s.io/v1\", Kind:\"EndpointSlice\", Name:\"latency-svc-7g77v-v2clv\", UID:\"ccee9205-cfc7-4c92-9dcf-e04deb4fa80e\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:\"svc-latency-3888\"}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:1, readerWait:0}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:\"v1\", Kind:\"Service\", Name:\"latency-svc-7g77v\", 
UID:\"76d866e0-7753-4010-a1f3-03f24b620538\", Controller:(*bool)(0xc00345fbda), BlockOwnerDeletion:(*bool)(0xc00345fbdb)}}}: endpointslices.discovery.k8s.io \"latency-svc-7g77v-v2clv\" not found\nI1010 15:49:21.503142       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-b8clq-zd2q4\" objectUID=7a5d4140-7431-4574-904b-0c3d678be3da kind=\"EndpointSlice\" virtual=false\nI1010 15:49:21.545729       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-bd7cq-6wcql\" objectUID=f7ad3a5b-2457-4679-9b7b-43016a58e8c0 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:21.607965       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-bjgsb-5kfzp\" objectUID=1961816e-0cea-4e69-a5ed-af7b4af127df kind=\"EndpointSlice\" virtual=false\nI1010 15:49:21.627873       1 controller_ref_manager.go:232] patching pod kubectl-1423_agnhost-primary-5k5fh to remove its controllerRef to v1/ReplicationController:agnhost-primary\nI1010 15:49:21.643892       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"kubectl-1423/agnhost-primary\" need=1 creating=1\nI1010 15:49:21.652883       1 event.go:291] \"Event occurred\" object=\"kubectl-1423/agnhost-primary\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: agnhost-primary-kptf8\"\nI1010 15:49:21.666021       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-bm776-mn9v6\" objectUID=62a256f0-8fae-48cd-a5fc-716e73d96f3b kind=\"EndpointSlice\" virtual=false\nI1010 15:49:21.672799       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-4287\nI1010 15:49:21.711794       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-bprc8-d47nv\" objectUID=80393ed1-1d99-45c9-aa28-df76c7219122 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:21.759779       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-bqf66-fjqll\" objectUID=0e899640-7ddc-41bd-bb57-ffba51884c60 kind=\"EndpointSlice\" virtual=false\nW1010 15:49:21.791563       1 utils.go:265] Service services-3513/service-headless using reserved endpoint slices label, skipping label service.kubernetes.io/headless: \nI1010 15:49:21.812177       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-bqpt6-nw66t\" objectUID=5e2a562d-7c3f-4883-b5c9-90def1cc4127 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:21.845757       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-c6h6k-sfkg4\" objectUID=5e2d5b5f-858c-4c7f-be6c-3882d96bea45 kind=\"EndpointSlice\" virtual=false\nE1010 15:49:21.901486       1 garbagecollector.go:350] error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:\"discovery.k8s.io/v1\", Kind:\"EndpointSlice\", Name:\"latency-svc-7mbgz-2567v\", UID:\"76a3f2a7-a7a6-409d-a2ec-5d2f615d680a\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:\"svc-latency-3888\"}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:1, readerWait:0}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, 
sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:\"v1\", Kind:\"Service\", Name:\"latency-svc-7mbgz\", UID:\"8f441737-9ed7-432e-ba1f-e87d62d69830\", Controller:(*bool)(0xc002f7087a), BlockOwnerDeletion:(*bool)(0xc002f7087b)}}}: endpointslices.discovery.k8s.io \"latency-svc-7mbgz-2567v\" not found\nI1010 15:49:21.901531       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-cgx7w-9ct49\" objectUID=e17068ea-92ff-4dec-9502-087b76748a43 kind=\"EndpointSlice\" virtual=false\nE1010 15:49:21.930173       1 namespace_controller.go:162] deletion of namespace svc-latency-3888 failed: unexpected items still remain in namespace: svc-latency-3888 for gvr: /v1, Resource=pods\nI1010 15:49:21.931169       1 stateful_set.go:440] StatefulSet has been deleted csi-mock-volumes-4287-7669/csi-mockplugin\nE1010 15:49:21.948062       1 garbagecollector.go:350] error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:\"discovery.k8s.io/v1\", Kind:\"EndpointSlice\", Name:\"latency-svc-7pxjt-nkmwk\", UID:\"67cfedba-6608-4e0a-8a56-8422e3ead582\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:\"svc-latency-3888\"}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:1, readerWait:0}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:\"v1\", Kind:\"Service\", Name:\"latency-svc-7pxjt\", UID:\"3f58fbe0-13de-4dc8-b682-feb0fc1fd7a2\", Controller:(*bool)(0xc0033cfaba), BlockOwnerDeletion:(*bool)(0xc0033cfabb)}}}: endpointslices.discovery.k8s.io \"latency-svc-7pxjt-nkmwk\" not found\nI1010 15:49:21.948152       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-chnhq-jdt27\" objectUID=56d65cd1-381a-40dc-a2a1-e07dc71d1544 kind=\"EndpointSlice\" virtual=false\nE1010 15:49:21.994897       1 garbagecollector.go:350] error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:\"discovery.k8s.io/v1\", Kind:\"EndpointSlice\", Name:\"latency-svc-7q6q6-9lh7t\", UID:\"7b52a86b-d32c-42b4-9bf0-1d835be9a8ef\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:\"svc-latency-3888\"}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:1, readerWait:0}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, 
readerCount:0, readerWait:0}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:\"v1\", Kind:\"Service\", Name:\"latency-svc-7q6q6\", UID:\"1717a403-10fc-46d1-b7b6-2e859da9a7a3\", Controller:(*bool)(0xc000f5478a), BlockOwnerDeletion:(*bool)(0xc000f5478b)}}}: endpointslices.discovery.k8s.io \"latency-svc-7q6q6-9lh7t\" not found\nI1010 15:49:21.995026       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-clz6h-458dw\" objectUID=4c8e937a-7656-425c-b7e0-c252166874a3 kind=\"EndpointSlice\" virtual=false\nE1010 15:49:22.044532       1 garbagecollector.go:350] error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:\"discovery.k8s.io/v1\", Kind:\"EndpointSlice\", Name:\"latency-svc-7tz7h-rvns9\", UID:\"00d7afb2-7961-4777-8bf4-77a3905ef436\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:\"svc-latency-3888\"}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:1, readerWait:0}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:\"v1\", Kind:\"Service\", Name:\"latency-svc-7tz7h\", UID:\"b367175d-f9b9-4809-a453-c554592b662d\", Controller:(*bool)(0xc003962fde), BlockOwnerDeletion:(*bool)(0xc003962fdf)}}}: endpointslices.discovery.k8s.io \"latency-svc-7tz7h-rvns9\" not found\nI1010 15:49:22.044574       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-cqlxr-9zvtw\" objectUID=d6a4a2a3-e36c-4e55-aeb8-867a1dc40cdd kind=\"EndpointSlice\" virtual=false\nE1010 15:49:22.057359       1 namespace_controller.go:162] deletion of namespace svc-latency-3888 failed: unexpected items still remain in namespace: svc-latency-3888 for gvr: /v1, Resource=pods\nE1010 15:49:22.097771       1 garbagecollector.go:350] error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:\"discovery.k8s.io/v1\", Kind:\"EndpointSlice\", Name:\"latency-svc-7xb9s-26vsh\", UID:\"ea29249c-5818-4e84-812e-01c71dcb698b\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:\"svc-latency-3888\"}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:1, readerWait:0}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:\"v1\", Kind:\"Service\", Name:\"latency-svc-7xb9s\", UID:\"c26437d3-d210-4479-b647-cbc3259c324b\", Controller:(*bool)(0xc0026b18de), BlockOwnerDeletion:(*bool)(0xc0026b18df)}}}: endpointslices.discovery.k8s.io 
\"latency-svc-7xb9s-26vsh\" not found\nI1010 15:49:22.097828       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-cqrd4-w9d7f\" objectUID=f2cbf3ab-08c9-4d5c-90a1-206aca36abe8 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:22.145516       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-d4wnb-hfn45\" objectUID=16c9439e-8304-4684-9fb1-01ef096f7567 kind=\"EndpointSlice\" virtual=false\nE1010 15:49:22.176972       1 namespace_controller.go:162] deletion of namespace svc-latency-3888 failed: unexpected items still remain in namespace: svc-latency-3888 for gvr: /v1, Resource=pods\nE1010 15:49:22.194796       1 garbagecollector.go:350] error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:\"discovery.k8s.io/v1\", Kind:\"EndpointSlice\", Name:\"latency-svc-829l2-h6wr8\", UID:\"398171f4-24b1-4043-b4ba-424417dccb55\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:\"svc-latency-3888\"}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:1, readerWait:0}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:\"v1\", Kind:\"Service\", Name:\"latency-svc-829l2\", UID:\"36757204-6d25-444e-bfcc-66a5bec77a9d\", Controller:(*bool)(0xc000f9ffca), BlockOwnerDeletion:(*bool)(0xc000f9ffcb)}}}: endpointslices.discovery.k8s.io \"latency-svc-829l2-h6wr8\" not found\nI1010 15:49:22.194844       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-d68wh-cf7bb\" objectUID=c7762c46-edbc-47bf-ba20-b5f7bc8f7327 kind=\"EndpointSlice\" virtual=false\nE1010 15:49:22.246519       1 garbagecollector.go:350] error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:\"discovery.k8s.io/v1\", Kind:\"EndpointSlice\", Name:\"latency-svc-86q5z-xqhgf\", UID:\"c751c170-e124-47ed-af3e-97c411ecfb78\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:\"svc-latency-3888\"}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:1, readerWait:0}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:\"v1\", Kind:\"Service\", Name:\"latency-svc-86q5z\", UID:\"911626b5-2344-4122-8713-ac4b21f368d3\", Controller:(*bool)(0xc00235ef0a), BlockOwnerDeletion:(*bool)(0xc00235ef0b)}}}: endpointslices.discovery.k8s.io \"latency-svc-86q5z-xqhgf\" not found\nI1010 15:49:22.246565       1 
garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-d6f28-9n4c9\" objectUID=9ad01ce9-5161-4b64-9843-b849e7e5e9e5 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:22.294567       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-dccxm-6zrv9\" objectUID=015c18da-0b49-41df-b5a7-12cfc414071d kind=\"EndpointSlice\" virtual=false\nE1010 15:49:22.306989       1 namespace_controller.go:162] deletion of namespace svc-latency-3888 failed: unexpected items still remain in namespace: svc-latency-3888 for gvr: /v1, Resource=pods\nI1010 15:49:22.346537       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-dcjlt-xn25h\" objectUID=e41f25fa-c75e-4822-86b2-0f7d0d6fac0a kind=\"EndpointSlice\" virtual=false\nI1010 15:49:22.396310       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-dddzv-9sw7b\" objectUID=c8653a46-e5fe-4d5c-b580-89aeeade08b1 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:22.446018       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-dhvwp-rk6vt\" objectUID=0d84e8e6-062c-4557-8763-c368029388a6 kind=\"EndpointSlice\" virtual=false\nE1010 15:49:22.465930       1 namespace_controller.go:162] deletion of namespace svc-latency-3888 failed: unexpected items still remain in namespace: svc-latency-3888 for gvr: /v1, Resource=pods\nI1010 15:49:22.503838       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-dlswk-2kgjp\" objectUID=818649a5-7be3-433f-be5a-38c4b50738ea kind=\"EndpointSlice\" virtual=false\nI1010 15:49:22.544861       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-dn4mx-psjdf\" objectUID=9202c730-343e-4fa3-a26f-5ceed8f56fc8 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:22.596818       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-dwxzn-gxfn6\" objectUID=dfb54933-1b99-490f-ab69-0ebf30160f7d kind=\"EndpointSlice\" virtual=false\nI1010 15:49:22.649921       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-dxbjd-2qhf6\" objectUID=af0e0045-080c-48a9-9b78-122ea983e48b kind=\"EndpointSlice\" virtual=false\nE1010 15:49:22.655798       1 namespace_controller.go:162] deletion of namespace svc-latency-3888 failed: unexpected items still remain in namespace: svc-latency-3888 for gvr: /v1, Resource=pods\nI1010 15:49:22.694237       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-dxkrm-kkmng\" objectUID=c38f216c-8938-403a-8d3a-5468f12cd8e3 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:22.746915       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-f8s42-fqf9p\" objectUID=69540b61-c0e6-4a63-8ec7-041a600a50c2 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:22.795020       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-fj4zc-hcggs\" objectUID=e9e1ace2-72bc-400c-81b8-0519a6135734 kind=\"EndpointSlice\" virtual=false\nW1010 15:49:22.806834       1 utils.go:265] Service services-3513/service-headless using reserved endpoint slices label, skipping label service.kubernetes.io/headless: \nW1010 15:49:22.818922       1 utils.go:265] Service services-3513/service-headless using reserved endpoint slices label, skipping label service.kubernetes.io/headless: \nI1010 15:49:22.846421       1 garbagecollector.go:471] \"Processing object\" 
object=\"svc-latency-3888/latency-svc-fkxzq-vmpd7\" objectUID=69c4a3fc-2b0d-4fb4-b80b-0a64bbd76b89 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:22.859200       1 namespace_controller.go:185] Namespace has been deleted webhook-9905\nI1010 15:49:22.899284       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-fvmsz-gwxpc\" objectUID=b1205972-433a-46ef-9a18-9dbf2faa6ec8 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:22.930981       1 pv_controller.go:879] volume \"local-pv9hcsq\" entered phase \"Available\"\nE1010 15:49:22.948753       1 tokens_controller.go:262] error synchronizing serviceaccount certificates-2257/default: secrets \"default-token-64rbq\" is forbidden: unable to create new content in namespace certificates-2257 because it is being terminated\nI1010 15:49:22.949360       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-fwgvx-6s9jw\" objectUID=2782b613-4ef7-4eb3-a24d-7622fb66ec9a kind=\"EndpointSlice\" virtual=false\nI1010 15:49:23.002031       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-fwxhm-mgmv9\" objectUID=34a850b3-2b84-4984-98d8-0422e36a6428 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:23.051644       1 namespace_controller.go:185] Namespace has been deleted webhook-9905-markers\nI1010 15:49:23.051684       1 request.go:665] Waited for 1.007058147s due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1/apis/discovery.k8s.io/v1/namespaces/svc-latency-3888/endpointslices/latency-svc-cqlxr-9zvtw\nI1010 15:49:23.057098       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-ghs6b-jmv5s\" objectUID=425f0b12-e9d9-44e5-8731-7f99f3a667eb kind=\"EndpointSlice\" virtual=false\nE1010 15:49:23.062527       1 namespace_controller.go:162] deletion of namespace svc-latency-3888 failed: unexpected items still remain in namespace: svc-latency-3888 for gvr: /v1, Resource=pods\nI1010 15:49:23.073335       1 pv_controller.go:930] claim \"persistent-local-volumes-test-7155/pvc-vbb77\" bound to volume \"local-pv9hcsq\"\nI1010 15:49:23.089809       1 pv_controller.go:879] volume \"local-pv9hcsq\" entered phase \"Bound\"\nI1010 15:49:23.089842       1 pv_controller.go:982] volume \"local-pv9hcsq\" bound to claim \"persistent-local-volumes-test-7155/pvc-vbb77\"\nI1010 15:49:23.106070       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-h29fx-pgbvc\" objectUID=6ec7732d-55b9-4602-9df5-974d8b9a4831 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:23.115659       1 pv_controller.go:823] claim \"persistent-local-volumes-test-7155/pvc-vbb77\" entered phase \"Bound\"\nI1010 15:49:23.144599       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-h9s7j-kss5f\" objectUID=37cb34eb-be66-459c-ac1c-556278087621 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:23.196082       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-hcrvm-9kkfd\" objectUID=2e2d5051-ceca-496a-865e-6a591eb3665c kind=\"EndpointSlice\" virtual=false\nI1010 15:49:23.246760       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-hd2bj-vq78d\" objectUID=f9daf88b-a5af-41b8-aa52-4e9116b2db57 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:23.296892       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-hm59j-bl5lt\" 
objectUID=9b0e4dc4-92d9-4882-a3ea-0b96dd875436 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:23.345171       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-hsh56-d67xs\" objectUID=a8791ed4-9b92-41da-a1ca-9c446cff53b9 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:23.415981       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-hsw9m-nfd6s\" objectUID=f4d4edc8-8aeb-488a-896b-c80fb087b1f3 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:23.446300       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-hv5w5-jhf5k\" objectUID=eccd183a-2376-43f2-85a4-65be0717fbbc kind=\"EndpointSlice\" virtual=false\nI1010 15:49:23.494378       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-hzz7t-knpdr\" objectUID=ca19ad5d-caea-47aa-8b23-4a69def8cf22 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:23.544696       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-jkbws-p45jt\" objectUID=0b6ab464-8611-49b1-a6c0-4a8c47eb5cc1 kind=\"EndpointSlice\" virtual=false\nE1010 15:49:23.555837       1 namespace_controller.go:162] deletion of namespace svc-latency-3888 failed: unexpected items still remain in namespace: svc-latency-3888 for gvr: /v1, Resource=pods\nI1010 15:49:23.594839       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-jkcjv-gblqj\" objectUID=694c42a3-622c-464a-bec8-3205e56026d4 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:23.644351       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-jq66g-bctvt\" objectUID=681f7998-0b49-4e88-ac76-e84819f3e43b kind=\"EndpointSlice\" virtual=false\nI1010 15:49:23.695828       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-jvhhc-l7s8k\" objectUID=f30eb594-c3b3-465e-bcfc-a26b96a95536 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:23.744647       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-k2bhk-zvkdf\" objectUID=0e4f96ea-b240-412a-bb78-dc6c0f8f5485 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:23.796816       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-kdz2c-l5m6n\" objectUID=53dccee0-be43-441e-ab8b-3dbbead18aea kind=\"EndpointSlice\" virtual=false\nI1010 15:49:23.844672       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-kk984-szchb\" objectUID=83a98200-3104-4211-800b-2b8d16f85e99 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:23.894685       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-kqczg-l6pr9\" objectUID=bbd8557c-56f9-4c96-986a-912c63c3577c kind=\"EndpointSlice\" virtual=false\nI1010 15:49:23.944546       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-kvkpw-q2fjn\" objectUID=ac487ab4-9ac4-4c02-86e6-d991e2377138 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:23.994762       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-kwhrz-mdhpq\" objectUID=67deadbb-026a-4b3e-8466-a65054b3212c kind=\"EndpointSlice\" virtual=false\nI1010 15:49:24.045711       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-ldstq-w9h5j\" objectUID=aa7da541-88f2-4948-9131-00b1373c11da kind=\"EndpointSlice\" virtual=false\nI1010 15:49:24.094513       1 garbagecollector.go:471] 
\"Processing object\" object=\"svc-latency-3888/latency-svc-ln4n9-7gvmf\" objectUID=3a106e25-794d-4572-a9ab-76985897da8d kind=\"EndpointSlice\" virtual=false\nI1010 15:49:24.144924       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-lpjp6-dvtv9\" objectUID=5781fd17-f3c1-4689-a1d7-364c07159815 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:24.196169       1 request.go:665] Waited for 1.000015892s due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1/apis/discovery.k8s.io/v1/namespaces/svc-latency-3888/endpointslices/latency-svc-hcrvm-9kkfd\nI1010 15:49:24.199966       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-lvs4b-cql9k\" objectUID=8bfed423-428f-4c75-8bbc-19e9590f585d kind=\"EndpointSlice\" virtual=false\nI1010 15:49:24.245731       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-lwr49-bm8gl\" objectUID=01efd47b-c882-461a-b2a3-b8e5f36630ea kind=\"EndpointSlice\" virtual=false\nE1010 15:49:24.290785       1 tokens_controller.go:262] error synchronizing serviceaccount kubectl-1413/default: secrets \"default-token-lqhsg\" is forbidden: unable to create new content in namespace kubectl-1413 because it is being terminated\nI1010 15:49:24.296282       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-lxwlr-w8qs5\" objectUID=dc016947-cc07-410a-8049-25ecf50eb594 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:24.348719       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-m6tsz-t4598\" objectUID=c18b963e-f53f-4cfc-b981-1089dc4691a5 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:24.396033       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-mf874-nhc4l\" objectUID=3b2809e4-2279-4686-a767-b43e58baad70 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:24.447792       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-mvdcp-vqbfs\" objectUID=be50cfd5-bacf-41f9-96b8-e51886512b28 kind=\"EndpointSlice\" virtual=false\nE1010 15:49:24.466234       1 namespace_controller.go:162] deletion of namespace svc-latency-3888 failed: unexpected items still remain in namespace: svc-latency-3888 for gvr: /v1, Resource=pods\nI1010 15:49:24.544669       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-n4st7-4dq7d\" objectUID=d69d71bf-903e-4913-abc8-b16c64a574b9 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:24.548608       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-n5bz2-zqjsf\" objectUID=421f104b-2397-43e5-8365-2160f4417800 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:24.594352       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-n5tzw-hb86r\" objectUID=f04ed3b8-613d-40f8-b6e1-977e3f3aba48 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:24.644480       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-n84mc-l4pzt\" objectUID=31e032b4-c87f-4076-85d0-9b300d74cf7a kind=\"EndpointSlice\" virtual=false\nI1010 15:49:24.681555       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-ed6216fc-74b3-4b3f-9951-82d47a370308\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-7017^4\") on node \"ip-172-20-42-51.sa-east-1.compute.internal\" \nI1010 15:49:24.683309       1 operation_generator.go:1577] Verified volume is 
safe to detach for volume \"pvc-ed6216fc-74b3-4b3f-9951-82d47a370308\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-7017^4\") on node \"ip-172-20-42-51.sa-east-1.compute.internal\" \nI1010 15:49:24.696546       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-n8zwn-fk76j\" objectUID=c6d4a838-3441-4f3a-95b2-edf62d825a88 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:24.744169       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-nbwl8-k6gzv\" objectUID=dd9d39d6-d2fd-4edf-b356-62122bd1b744 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:24.794443       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-ndwq9-dm8wh\" objectUID=f9e745c6-2a4e-49b8-b9a2-c1b20c1f5c8e kind=\"EndpointSlice\" virtual=false\nI1010 15:49:24.844351       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-ndwvx-mhxlx\" objectUID=0a5d9baa-e843-4028-9701-aec592682a40 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:24.894428       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-nhgb8-q7zts\" objectUID=f74e2bc7-1151-401f-913f-a70633df88ef kind=\"EndpointSlice\" virtual=false\nI1010 15:49:24.944414       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-njrbr-2m6rw\" objectUID=335c3b86-0d09-488a-8ef6-78b65b4d319f kind=\"EndpointSlice\" virtual=false\nI1010 15:49:24.994368       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-nkgdk-lqpnt\" objectUID=92e9c402-338d-48ad-ac0b-b7d66a720292 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:25.044847       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-np8m5-bmhvl\" objectUID=29a3095d-a308-41ce-8395-a3828068cd66 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:25.096854       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-nts55-tng56\" objectUID=e662eade-6bad-4d0b-9314-253cbf3c8466 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:25.102013       1 namespace_controller.go:185] Namespace has been deleted security-context-4265\nI1010 15:49:25.144605       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-p4r4r-cxbd6\" objectUID=bfb9b563-341b-44cf-b200-6d626dd937de kind=\"EndpointSlice\" virtual=false\nI1010 15:49:25.197698       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-p8svh-9wb5f\" objectUID=2befcb32-a214-4e2c-a503-228e34cc8571 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:25.234555       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume \"pvc-ed6216fc-74b3-4b3f-9951-82d47a370308\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-7017^4\") on node \"ip-172-20-42-51.sa-east-1.compute.internal\" \nI1010 15:49:25.250080       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-p9wms-kwbwc\" objectUID=2e0d2e27-0a16-451e-ab41-dd67d34b1963 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:25.296478       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-pbdx9-c5v94\" objectUID=bbeb449c-b537-4eb3-8cae-c1a860d694d3 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:25.346630       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-pbfrc-96g8p\" objectUID=367b97d4-f095-4d34-a3b6-5e56b9842385 
kind=\"EndpointSlice\" virtual=false\nI1010 15:49:25.359524       1 namespace_controller.go:185] Namespace has been deleted provisioning-2491\nI1010 15:49:25.399849       1 request.go:665] Waited for 1.003641165s due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1/apis/discovery.k8s.io/v1/namespaces/svc-latency-3888/endpointslices/latency-svc-mf874-nhc4l\nI1010 15:49:25.401255       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-pdvjx-87vr6\" objectUID=5911d367-7e8b-4648-bd6b-49bf968fc35e kind=\"EndpointSlice\" virtual=false\nI1010 15:49:25.450321       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-pgcsz-xkt9z\" objectUID=ac442bce-533d-4d36-8745-57b729af111c kind=\"EndpointSlice\" virtual=false\nI1010 15:49:25.494063       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-pjc28-ct5cb\" objectUID=73a9fbaf-fb37-4fb1-af32-4d6b358421e8 kind=\"EndpointSlice\" virtual=false\nE1010 15:49:25.524170       1 pv_controller.go:1451] error finding provisioning plugin for claim volume-9481/pvc-d7m2k: storageclass.storage.k8s.io \"volume-9481\" not found\nI1010 15:49:25.524489       1 event.go:291] \"Event occurred\" object=\"volume-9481/pvc-d7m2k\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"volume-9481\\\" not found\"\nI1010 15:49:25.544779       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-pjg9p-6wmnc\" objectUID=817edfc5-c898-41c0-9459-c0f09f59df3d kind=\"EndpointSlice\" virtual=false\nI1010 15:49:25.594503       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-pjtpw-sw2dl\" objectUID=467addf1-2aee-45bc-9efb-a0f6d6b6f83a kind=\"EndpointSlice\" virtual=false\nI1010 15:49:25.644445       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-pjwbb-dftv5\" objectUID=e6729990-64f2-4464-b7d7-f8093e7d8911 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:25.671780       1 pv_controller.go:879] volume \"local-hx88h\" entered phase \"Available\"\nI1010 15:49:25.695241       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-ppxxk-zqw84\" objectUID=08d8a314-d8e7-4ca7-b48f-8be2dc6b4afd kind=\"EndpointSlice\" virtual=false\nI1010 15:49:25.698079       1 pv_controller_base.go:505] deletion of claim \"csi-mock-volumes-7017/pvc-jfv9x\" was already processed\nI1010 15:49:25.749866       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-q5mpx-l66fs\" objectUID=78094b6c-02d5-4cec-874e-32ffabb002c9 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:25.801629       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-qc4z9-42rx4\" objectUID=a65e7e6c-4bd1-4671-9b10-8045cc62012d kind=\"EndpointSlice\" virtual=false\nI1010 15:49:25.844876       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-qc92b-r4s9s\" objectUID=b51797b2-1c70-4530-8bc7-53573a8ddc06 kind=\"EndpointSlice\" virtual=false\nE1010 15:49:25.857895       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE1010 15:49:25.858533       1 namespace_controller.go:162] deletion of namespace svc-latency-3888 
failed: unexpected items still remain in namespace: svc-latency-3888 for gvr: /v1, Resource=pods\nI1010 15:49:25.896791       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-qf847-fhr72\" objectUID=0d3f047b-38eb-4384-9bfd-4e6f3b7e92c0 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:25.944676       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-qt6kr-thjfl\" objectUID=f81da3ca-6d96-4131-a4c4-87d44b6259ad kind=\"EndpointSlice\" virtual=false\nI1010 15:49:25.998897       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-qtw7w-lldjp\" objectUID=2d7a42a9-335e-439d-80d9-625a4ec29aea kind=\"EndpointSlice\" virtual=false\nI1010 15:49:26.046454       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-rn9jc-ttzf4\" objectUID=a05aa92b-5814-4e1a-901f-c508b32c0b61 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:26.095391       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-rsb4g-txrbw\" objectUID=73f6a81d-b05a-4e16-b95f-4fb6097e24ed kind=\"EndpointSlice\" virtual=false\nI1010 15:49:26.128621       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"services-3513/service-headless-toggled\" need=3 creating=3\nI1010 15:49:26.133157       1 event.go:291] \"Event occurred\" object=\"services-3513/service-headless-toggled\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: service-headless-toggled-rrgwg\"\nI1010 15:49:26.138184       1 event.go:291] \"Event occurred\" object=\"services-3513/service-headless-toggled\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: service-headless-toggled-5g5j4\"\nI1010 15:49:26.143722       1 event.go:291] \"Event occurred\" object=\"services-3513/service-headless-toggled\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: service-headless-toggled-8m9qd\"\nI1010 15:49:26.148972       1 namespace_controller.go:185] Namespace has been deleted provisioning-7344\nI1010 15:49:26.158825       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-rsfkm-9vkdq\" objectUID=5d6be2e6-266e-4ec9-8c14-7ce8d4f15d40 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:26.198027       1 garbagecollector.go:471] \"Processing object\" object=\"dns-2418/dns-test-21e301c0-ba42-4906-98c5-9d9f9cc36c82\" objectUID=7ffddba5-98d5-4c1a-a9e9-e853ff4c77f1 kind=\"CiliumEndpoint\" virtual=false\nI1010 15:49:26.244778       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-s77g5-xfd5c\" objectUID=fdbaf168-b8a0-4e20-8699-d88613a44864 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:26.294941       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-sjcmh-z57wj\" objectUID=e1b2a304-94d3-4776-8c1a-16bbafbf7fb6 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:26.305241       1 namespace_controller.go:185] Namespace has been deleted request-timeout-9678\nI1010 15:49:26.344294       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-sjjq8-stjrc\" objectUID=3dcb6604-50c7-45b5-98d0-0755f6ffe738 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:26.394135       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-svbmz-czln5\" 
objectUID=9f3055df-cad8-4ab1-bb93-61f0ff420219 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:26.447200       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-svv9p-q662d\" objectUID=cb968eb7-787e-43bb-8e14-b5ef8bf6316b kind=\"EndpointSlice\" virtual=false\nI1010 15:49:26.498417       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-swjdx-q6s9p\" objectUID=24149000-d65a-410d-b0d1-20b3711d1bda kind=\"EndpointSlice\" virtual=false\nE1010 15:49:26.520980       1 namespace_controller.go:162] deletion of namespace cronjob-2435 failed: unexpected items still remain in namespace: cronjob-2435 for gvr: /v1, Resource=pods\nI1010 15:49:26.548290       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-sxl2j-njttj\" objectUID=c866de38-d1e8-42ce-b0bd-6e09da692bfa kind=\"EndpointSlice\" virtual=false\nI1010 15:49:26.595406       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-tjlkg-ltjv4\" objectUID=08be31cf-aefa-485f-beba-742aa509d296 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:26.644606       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-tk9gt-nb62t\" objectUID=f302cab3-d6b9-44f7-95f4-2a362acfc766 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:26.694771       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-tmhs8-28ndp\" objectUID=5f26b1b9-2242-43e3-8576-c7b737a6764e kind=\"EndpointSlice\" virtual=false\nI1010 15:49:26.749025       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-ttd99-l69s2\" objectUID=7d14e099-b2f1-4092-85fd-c5962c16ed3f kind=\"EndpointSlice\" virtual=false\nI1010 15:49:26.750398       1 stateful_set.go:440] StatefulSet has been deleted provisioning-2491-117/csi-hostpathplugin\nI1010 15:49:26.798597       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-tw9tk-5ccn7\" objectUID=cd4f7326-4dce-4275-8092-dd7ef2954a1e kind=\"EndpointSlice\" virtual=false\nI1010 15:49:26.844677       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-v52xt-wdllc\" objectUID=44a20c62-c179-4fee-ad6f-f09a1eba0a66 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:26.895983       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-v927n-2q764\" objectUID=4b715d7c-8d25-41f9-afbc-23c16a8065f4 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:26.945356       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-vbgp6-2css9\" objectUID=03a516d9-7127-4925-aa73-d8dd4b4baa7c kind=\"EndpointSlice\" virtual=false\nI1010 15:49:27.000162       1 garbagecollector.go:471] \"Processing object\" object=\"proxy-1156/test-service-trlsr\" objectUID=7f7225d7-7a29-49d4-bde9-ddad8dff7a57 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:27.055860       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-vbt2q-6gxkd\" objectUID=cec45ec9-4bd8-4ca6-9771-3fa089d056fe kind=\"EndpointSlice\" virtual=false\nI1010 15:49:27.095704       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-vfhd5-zwcpl\" objectUID=7b05d328-1c16-455f-9447-9f11a936a52a kind=\"EndpointSlice\" virtual=false\nI1010 15:49:27.137192       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"kubectl-1423/agnhost-primary\" need=1 creating=1\nI1010 15:49:27.145896       1 
garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-vksqn-ps8mg\" objectUID=8de16f20-a4be-4ca8-ae74-b06a1d1e291f kind=\"EndpointSlice\" virtual=false\nI1010 15:49:27.196048       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-vl6fq-rgrrp\" objectUID=0921e114-c4d4-4476-b702-c9926472d1d1 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:27.245276       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-vnjvg-mdbgj\" objectUID=63711f84-4b4a-42a0-9da0-32e06f4bcb3e kind=\"EndpointSlice\" virtual=false\nI1010 15:49:27.295660       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-vq6dg-cwkg8\" objectUID=7863223b-9928-44f9-b97d-0a56c9a5d087 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:27.313890       1 stateful_set.go:440] StatefulSet has been deleted provisioning-7344-8991/csi-hostpathplugin\nE1010 15:49:27.335536       1 tokens_controller.go:262] error synchronizing serviceaccount csi-mock-volumes-4287-7669/default: secrets \"default-token-m47pz\" is forbidden: unable to create new content in namespace csi-mock-volumes-4287-7669 because it is being terminated\nI1010 15:49:27.345460       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-vw5mb-pvnx6\" objectUID=8d0b3a76-b952-4cf4-909a-d60faa5d224c kind=\"EndpointSlice\" virtual=false\nI1010 15:49:27.394133       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-vwgx9-7d6cs\" objectUID=d4edfcf2-9d93-4f03-ba98-3b4457a778a2 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:27.444141       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-vwnr9-bs4zk\" objectUID=72fac002-20e4-450f-98c5-2ffe06b2321a kind=\"EndpointSlice\" virtual=false\nI1010 15:49:27.499783       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-vzs6f-hpw6v\" objectUID=66cdfb95-c714-421e-b89e-727113d0a209 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:27.556322       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-w5c6k-kkfwp\" objectUID=eb58801a-c0e5-4aab-98a0-eb121a9ce8a2 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:27.594218       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-w7ztm-5zp4t\" objectUID=2b856ef2-0fc4-4607-aa42-194cb7f25ccc kind=\"EndpointSlice\" virtual=false\nI1010 15:49:27.644648       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-wc2g5-xp2nx\" objectUID=c4b27605-c31d-4609-b619-46088343ea84 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:27.695696       1 request.go:665] Waited for 1.000762135s due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1/apis/discovery.k8s.io/v1/namespaces/svc-latency-3888/endpointslices/latency-svc-tmhs8-28ndp\nI1010 15:49:27.697076       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-wcjdh-zhwcd\" objectUID=e52715a9-3e17-43d9-ad9e-3a60c69ef84c kind=\"EndpointSlice\" virtual=false\nI1010 15:49:27.744811       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-wpj9k-fcc6h\" objectUID=94082d99-593b-45bb-86d2-7627ef8c64b0 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:27.794327       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-ws5vs-fb7bn\" 
objectUID=e6fccf91-c1ce-47df-b392-58a64014e59a kind=\"EndpointSlice\" virtual=false\nI1010 15:49:27.844523       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-xbsc2-dfj5r\" objectUID=6bd9eecd-9367-4f28-8899-eb257574ac70 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:27.896849       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-xl6mc-bdv42\" objectUID=00e7b0ef-b311-418a-b077-ce09c9f04da2 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:27.944766       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-xmht8-wll4n\" objectUID=a14a221a-dda6-4365-b762-ba8ca9f26c19 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:27.994947       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-xt9fs-5vkc2\" objectUID=f54a7b85-1057-46d7-9ca4-f6e71bb10be7 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:28.044495       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-xvf6r-68txj\" objectUID=01cd98ce-49e1-4463-bc15-45f7b869a5df kind=\"EndpointSlice\" virtual=false\nI1010 15:49:28.092015       1 namespace_controller.go:185] Namespace has been deleted certificates-2257\nI1010 15:49:28.094628       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-z2cvb-pdlvh\" objectUID=abdefcd2-e6ba-4ea0-ad71-98878f51a102 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:28.145867       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-z4sn5-vs9bc\" objectUID=79c91d89-408c-4023-a7de-cab3e167966e kind=\"EndpointSlice\" virtual=false\nI1010 15:49:28.194931       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-zf559-4rxqb\" objectUID=1a02431e-32af-47b1-9eb5-c16dfef9064f kind=\"EndpointSlice\" virtual=false\nI1010 15:49:28.244446       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-zgwm6-7bqhh\" objectUID=98a0f0de-463c-433d-a07e-7fc40e7d677a kind=\"EndpointSlice\" virtual=false\nI1010 15:49:28.297379       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-zjgkj-mfjgr\" objectUID=c4b0e433-57de-46f0-bb70-0a5c9be2d17d kind=\"EndpointSlice\" virtual=false\nI1010 15:49:28.347758       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-zmgbs-frtlr\" objectUID=870ff8a3-7562-4d1c-8085-4293372fb5d1 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:28.400705       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-zss6f-6lx5k\" objectUID=d57b87a9-60c1-48b9-b9ee-8603fea291e5 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:28.444715       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-zvv2t-5x8wb\" objectUID=0e81b106-af12-4a97-8e48-0af46a9e9898 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:28.495190       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-zz9db-2hjkp\" objectUID=b4ce7f7e-37f8-4422-b8b3-7da81c184a40 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:28.545664       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-7g77v-v2clv\" objectUID=ccee9205-cfc7-4c92-9dcf-e04deb4fa80e kind=\"EndpointSlice\" virtual=false\nI1010 15:49:28.595400       1 garbagecollector.go:471] \"Processing object\" object=\"kubectl-1423/agnhost-primary\" 
objectUID=0fea98cc-04fa-4be3-9a8c-2aa4ba00dcc9 kind=\"ReplicationController\" virtual=false\nI1010 15:49:28.644476       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-7mbgz-2567v\" objectUID=76a3f2a7-a7a6-409d-a2ec-5d2f615d680a kind=\"EndpointSlice\" virtual=false\nI1010 15:49:28.696913       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/svc-latency-rc-n2wwg\" objectUID=796a1b37-5f89-42f0-a473-f5eef0398035 kind=\"Pod\" virtual=false\nI1010 15:49:28.696938       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-4287-7669/csi-mockplugin-6d48cf795f\" objectUID=0b8ce653-bb93-4f3d-9634-ed1b68734856 kind=\"ControllerRevision\" virtual=false\nI1010 15:49:28.747618       1 request.go:665] Waited for 1.002746557s due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1/apis/discovery.k8s.io/v1/namespaces/svc-latency-3888/endpointslices/latency-svc-wpj9k-fcc6h\nI1010 15:49:28.748906       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-4287-7669/csi-mockplugin-0\" objectUID=2723a214-56d1-465f-8cd8-90ed16f673a8 kind=\"Pod\" virtual=false\nI1010 15:49:28.748945       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-7pxjt-nkmwk\" objectUID=67cfedba-6608-4e0a-8a56-8422e3ead582 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:28.794606       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-7q6q6-9lh7t\" objectUID=7b52a86b-d32c-42b4-9bf0-1d835be9a8ef kind=\"EndpointSlice\" virtual=false\nI1010 15:49:28.845521       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-7tz7h-rvns9\" objectUID=00d7afb2-7961-4777-8bf4-77a3905ef436 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:28.894461       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-7xb9s-26vsh\" objectUID=ea29249c-5818-4e84-812e-01c71dcb698b kind=\"EndpointSlice\" virtual=false\nI1010 15:49:28.944617       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-829l2-h6wr8\" objectUID=398171f4-24b1-4043-b4ba-424417dccb55 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:28.994254       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-3888/latency-svc-86q5z-xqhgf\" objectUID=c751c170-e124-47ed-af3e-97c411ecfb78 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:29.024472       1 stateful_set_control.go:521] StatefulSet statefulset-1274/ss2 terminating Pod ss2-0 for scale down\nW1010 15:49:29.027792       1 endpointslice_controller.go:306] Error syncing endpoint slices for service \"statefulset-1274/test\", retrying. 
Error: EndpointSlice informer cache is out of date\nI1010 15:49:29.032701       1 event.go:291] \"Event occurred\" object=\"statefulset-1274/ss2\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"delete Pod ss2-0 in StatefulSet ss2 successful\"\nI1010 15:49:29.050956       1 garbagecollector.go:471] \"Processing object\" object=\"provisioning-2491-117/csi-hostpathplugin-0\" objectUID=9006cb74-301e-4e7b-9889-2cd4010de46b kind=\"Pod\" virtual=false\nI1010 15:49:29.094250       1 garbagecollector.go:471] \"Processing object\" object=\"provisioning-2491-117/csi-hostpathplugin-8575685bb5\" objectUID=6363ebc9-91f8-4bf5-b3cf-d03e63d19a87 kind=\"ControllerRevision\" virtual=false\nI1010 15:49:29.143950       1 garbagecollector.go:471] \"Processing object\" object=\"kubectl-1423/agnhost-primary-kptf8\" objectUID=9f083d40-a1cf-42c9-8ecf-faff3812ec4a kind=\"Pod\" virtual=false\nI1010 15:49:29.144040       1 garbagecollector.go:471] \"Processing object\" object=\"provisioning-7344-8991/csi-hostpathplugin-56cdbbfbdd\" objectUID=07da1b4b-265f-4100-96bb-a2176b7abed3 kind=\"ControllerRevision\" virtual=false\nI1010 15:49:29.195363       1 garbagecollector.go:471] \"Processing object\" object=\"provisioning-7344-8991/csi-hostpathplugin-0\" objectUID=6b856f06-c8b3-4e4d-b3b5-47d01062f139 kind=\"Pod\" virtual=false\nI1010 15:49:29.335363       1 namespace_controller.go:185] Namespace has been deleted dns-2418\nI1010 15:49:29.421239       1 namespace_controller.go:185] Namespace has been deleted kubectl-1413\nI1010 15:49:30.044716       1 garbagecollector.go:580] \"Deleting object\" object=\"provisioning-2491-117/csi-hostpathplugin-0\" objectUID=9006cb74-301e-4e7b-9889-2cd4010de46b kind=\"Pod\" propagationPolicy=Background\nI1010 15:49:30.095136       1 garbagecollector.go:580] \"Deleting object\" object=\"provisioning-2491-117/csi-hostpathplugin-8575685bb5\" objectUID=6363ebc9-91f8-4bf5-b3cf-d03e63d19a87 kind=\"ControllerRevision\" propagationPolicy=Background\nI1010 15:49:30.145274       1 garbagecollector.go:580] \"Deleting object\" object=\"provisioning-7344-8991/csi-hostpathplugin-56cdbbfbdd\" objectUID=07da1b4b-265f-4100-96bb-a2176b7abed3 kind=\"ControllerRevision\" propagationPolicy=Background\nI1010 15:49:30.195033       1 garbagecollector.go:580] \"Deleting object\" object=\"provisioning-7344-8991/csi-hostpathplugin-0\" objectUID=6b856f06-c8b3-4e4d-b3b5-47d01062f139 kind=\"Pod\" propagationPolicy=Background\nI1010 15:49:30.745140       1 pv_controller.go:930] claim \"volume-9481/pvc-d7m2k\" bound to volume \"local-hx88h\"\nI1010 15:49:30.752528       1 pv_controller.go:879] volume \"local-hx88h\" entered phase \"Bound\"\nI1010 15:49:30.752713       1 pv_controller.go:982] volume \"local-hx88h\" bound to claim \"volume-9481/pvc-d7m2k\"\nI1010 15:49:30.760149       1 pv_controller.go:823] claim \"volume-9481/pvc-d7m2k\" entered phase \"Bound\"\nI1010 15:49:31.397741       1 expand_controller.go:289] Ignoring the PVC \"csi-mock-volumes-5498/pvc-mxl2l\" (uid: \"49fa8fdf-60b8-42c2-859e-576cf5846ae8\") : didn't find a plugin capable of expanding the volume; waiting for an external controller to process this PVC.\nI1010 15:49:31.398166       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-5498/pvc-mxl2l\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ExternalExpanding\" message=\"Ignoring the PVC: didn't find a plugin capable of expanding the volume; waiting for an external controller to process this PVC.\"\nI1010 
15:49:32.609859       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"replication-controller-7235/rc-test\" need=2 creating=1\nI1010 15:49:32.628664       1 event.go:291] \"Event occurred\" object=\"replication-controller-7235/rc-test\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: rc-test-5c6sf\"\nE1010 15:49:32.685252       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-7344-8991/default: serviceaccounts \"default\" not found\nE1010 15:49:32.833651       1 tokens_controller.go:262] error synchronizing serviceaccount csi-mock-volumes-7017/default: secrets \"default-token-bttgq\" is forbidden: unable to create new content in namespace csi-mock-volumes-7017 because it is being terminated\nI1010 15:49:32.926629       1 pv_controller.go:879] volume \"local-pvmj8js\" entered phase \"Available\"\nI1010 15:49:33.066881       1 pv_controller.go:930] claim \"persistent-local-volumes-test-7945/pvc-27wmf\" bound to volume \"local-pvmj8js\"\nI1010 15:49:33.073715       1 pv_controller.go:879] volume \"local-pvmj8js\" entered phase \"Bound\"\nI1010 15:49:33.073749       1 pv_controller.go:982] volume \"local-pvmj8js\" bound to claim \"persistent-local-volumes-test-7945/pvc-27wmf\"\nI1010 15:49:33.081231       1 pv_controller.go:823] claim \"persistent-local-volumes-test-7945/pvc-27wmf\" entered phase \"Bound\"\nI1010 15:49:33.136016       1 job_controller.go:406] enqueueing job job-4122/suspend-true-to-false\nI1010 15:49:33.136574       1 event.go:291] \"Event occurred\" object=\"job-4122/suspend-true-to-false\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"Suspended\" message=\"Job suspended\"\nI1010 15:49:33.141341       1 job_controller.go:406] enqueueing job job-4122/suspend-true-to-false\nE1010 15:49:33.344629       1 tokens_controller.go:262] error synchronizing serviceaccount projected-1677/default: secrets \"default-token-9w7k4\" is forbidden: unable to create new content in namespace projected-1677 because it is being terminated\nI1010 15:49:33.543012       1 namespace_controller.go:185] Namespace has been deleted svc-latency-3888\nI1010 15:49:34.088768       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"provisioning-3279/pvc-j2kzh\"\nI1010 15:49:34.096127       1 pv_controller.go:640] volume \"pvc-be6f307f-3032-4e99-9137-ea66cb0bed4a\" is released and reclaim policy \"Delete\" will be executed\nI1010 15:49:34.103200       1 pv_controller.go:879] volume \"pvc-be6f307f-3032-4e99-9137-ea66cb0bed4a\" entered phase \"Released\"\nI1010 15:49:34.109107       1 pv_controller.go:1340] isVolumeReleased[pvc-be6f307f-3032-4e99-9137-ea66cb0bed4a]: volume is released\nE1010 15:49:36.109530       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE1010 15:49:37.299565       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-1319/default: secrets \"default-token-dk7t9\" is forbidden: unable to create new content in namespace provisioning-1319 because it is being terminated\nI1010 15:49:37.507493       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-4287-7669\nI1010 15:49:37.697219       1 garbagecollector.go:471] \"Processing object\" object=\"container-probe-1179/startup-b7eb9f16-a8eb-428e-b362-a11c5b8020b7\" objectUID=c7ab36cf-b137-45e7-9645-c3a6e2b37458 
kind=\"CiliumEndpoint\" virtual=false\nI1010 15:49:37.700407       1 garbagecollector.go:580] \"Deleting object\" object=\"container-probe-1179/startup-b7eb9f16-a8eb-428e-b362-a11c5b8020b7\" objectUID=c7ab36cf-b137-45e7-9645-c3a6e2b37458 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI1010 15:49:37.899028       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-7017\nI1010 15:49:37.933254       1 namespace_controller.go:185] Namespace has been deleted provisioning-7344-8991\nI1010 15:49:38.020181       1 stateful_set.go:440] StatefulSet has been deleted csi-mock-volumes-7017-2423/csi-mockplugin\nI1010 15:49:38.020378       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-7017-2423/csi-mockplugin-0\" objectUID=1bca95e2-53ed-48e4-98f7-960c6930bad9 kind=\"Pod\" virtual=false\nI1010 15:49:38.020407       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-7017-2423/csi-mockplugin-5dc595b64f\" objectUID=9d17cf62-7ba9-4731-8d9c-d61644a8967a kind=\"ControllerRevision\" virtual=false\nI1010 15:49:38.022387       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-7017-2423/csi-mockplugin-0\" objectUID=1bca95e2-53ed-48e4-98f7-960c6930bad9 kind=\"Pod\" propagationPolicy=Background\nI1010 15:49:38.022643       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-7017-2423/csi-mockplugin-5dc595b64f\" objectUID=9d17cf62-7ba9-4731-8d9c-d61644a8967a kind=\"ControllerRevision\" propagationPolicy=Background\nI1010 15:49:38.163436       1 stateful_set.go:440] StatefulSet has been deleted csi-mock-volumes-7017-2423/csi-mockplugin-attacher\nI1010 15:49:38.163714       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-7017-2423/csi-mockplugin-attacher-0\" objectUID=6bbbc4a9-078d-4ad2-93fe-cd91780e933f kind=\"Pod\" virtual=false\nI1010 15:49:38.164051       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-7017-2423/csi-mockplugin-attacher-745975dfcc\" objectUID=38911d0d-bb9e-4379-a9ef-b9f0fe72b8cb kind=\"ControllerRevision\" virtual=false\nI1010 15:49:38.165756       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-7017-2423/csi-mockplugin-attacher-0\" objectUID=6bbbc4a9-078d-4ad2-93fe-cd91780e933f kind=\"Pod\" propagationPolicy=Background\nI1010 15:49:38.166083       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-7017-2423/csi-mockplugin-attacher-745975dfcc\" objectUID=38911d0d-bb9e-4379-a9ef-b9f0fe72b8cb kind=\"ControllerRevision\" propagationPolicy=Background\nE1010 15:49:38.327315       1 pv_controller.go:1451] error finding provisioning plugin for claim provisioning-6016/pvc-pdhbt: storageclass.storage.k8s.io \"provisioning-6016\" not found\nI1010 15:49:38.327475       1 event.go:291] \"Event occurred\" object=\"provisioning-6016/pvc-pdhbt\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"provisioning-6016\\\" not found\"\nI1010 15:49:38.458360       1 namespace_controller.go:185] Namespace has been deleted projected-1677\nI1010 15:49:38.556284       1 pv_controller.go:879] volume \"local-rt8mc\" entered phase \"Available\"\nE1010 15:49:38.784618       1 tokens_controller.go:262] error synchronizing serviceaccount nettest-2151/default: secrets \"default-token-x4phn\" is forbidden: unable to create new content in namespace nettest-2151 because it is being terminated\nI1010 15:49:39.749626       1 
namespace_controller.go:185] Namespace has been deleted nettest-7546\nI1010 15:49:40.348606       1 stateful_set.go:440] StatefulSet has been deleted statefulset-1274/ss2\nI1010 15:49:40.348612       1 garbagecollector.go:471] \"Processing object\" object=\"statefulset-1274/ss2-5bbbc9fc94\" objectUID=0bf2ff7c-ec32-4a24-a28b-32ec57c261d1 kind=\"ControllerRevision\" virtual=false\nI1010 15:49:40.348635       1 garbagecollector.go:471] \"Processing object\" object=\"statefulset-1274/ss2-677d6db895\" objectUID=0c511b97-ee65-4d6d-9dfb-82e134caafe9 kind=\"ControllerRevision\" virtual=false\nI1010 15:49:40.351471       1 garbagecollector.go:580] \"Deleting object\" object=\"statefulset-1274/ss2-677d6db895\" objectUID=0c511b97-ee65-4d6d-9dfb-82e134caafe9 kind=\"ControllerRevision\" propagationPolicy=Background\nI1010 15:49:40.351471       1 garbagecollector.go:580] \"Deleting object\" object=\"statefulset-1274/ss2-5bbbc9fc94\" objectUID=0bf2ff7c-ec32-4a24-a28b-32ec57c261d1 kind=\"ControllerRevision\" propagationPolicy=Background\nI1010 15:49:40.726361       1 pv_controller.go:879] volume \"local-pvj4c6m\" entered phase \"Available\"\nI1010 15:49:40.867064       1 pv_controller.go:930] claim \"persistent-local-volumes-test-8484/pvc-tpfjm\" bound to volume \"local-pvj4c6m\"\nI1010 15:49:40.892478       1 pv_controller.go:879] volume \"local-pvj4c6m\" entered phase \"Bound\"\nI1010 15:49:40.892513       1 pv_controller.go:982] volume \"local-pvj4c6m\" bound to claim \"persistent-local-volumes-test-8484/pvc-tpfjm\"\nI1010 15:49:40.898593       1 pv_controller.go:823] claim \"persistent-local-volumes-test-8484/pvc-tpfjm\" entered phase \"Bound\"\nI1010 15:49:41.170042       1 graph_builder.go:587] add [v1/Pod, namespace: ephemeral-7457, name: inline-volume-tester-sr2sl, uid: 80695a1b-bed8-4a6f-b78b-fe921f73098d] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI1010 15:49:41.170489       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-7457/inline-volume-tester-sr2sl-my-volume-0\" objectUID=02574bae-0800-45bd-85f4-419e3c36a2e6 kind=\"PersistentVolumeClaim\" virtual=false\nI1010 15:49:41.171029       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-7457/inline-volume-tester-sr2sl\" objectUID=a46a2359-5d38-4fad-b87a-f79f407572f4 kind=\"CiliumEndpoint\" virtual=false\nI1010 15:49:41.171210       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-7457/inline-volume-tester-sr2sl\" objectUID=80695a1b-bed8-4a6f-b78b-fe921f73098d kind=\"Pod\" virtual=false\nI1010 15:49:41.176060       1 garbagecollector.go:595] adding [v1/PersistentVolumeClaim, namespace: ephemeral-7457, name: inline-volume-tester-sr2sl-my-volume-0, uid: 02574bae-0800-45bd-85f4-419e3c36a2e6] to attemptToDelete, because its owner [v1/Pod, namespace: ephemeral-7457, name: inline-volume-tester-sr2sl, uid: 80695a1b-bed8-4a6f-b78b-fe921f73098d] is deletingDependents\nI1010 15:49:41.176205       1 garbagecollector.go:595] adding [cilium.io/v2/CiliumEndpoint, namespace: ephemeral-7457, name: inline-volume-tester-sr2sl, uid: a46a2359-5d38-4fad-b87a-f79f407572f4] to attemptToDelete, because its owner [v1/Pod, namespace: ephemeral-7457, name: inline-volume-tester-sr2sl, uid: 80695a1b-bed8-4a6f-b78b-fe921f73098d] is deletingDependents\nI1010 15:49:41.177762       1 garbagecollector.go:580] \"Deleting object\" object=\"ephemeral-7457/inline-volume-tester-sr2sl-my-volume-0\" objectUID=02574bae-0800-45bd-85f4-419e3c36a2e6 kind=\"PersistentVolumeClaim\" 
propagationPolicy=Background\nI1010 15:49:41.178137       1 garbagecollector.go:580] \"Deleting object\" object=\"ephemeral-7457/inline-volume-tester-sr2sl\" objectUID=a46a2359-5d38-4fad-b87a-f79f407572f4 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI1010 15:49:41.182335       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"ephemeral-7457/inline-volume-tester-sr2sl\" PVC=\"ephemeral-7457/inline-volume-tester-sr2sl-my-volume-0\"\nI1010 15:49:41.183899       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"ephemeral-7457/inline-volume-tester-sr2sl-my-volume-0\"\nI1010 15:49:41.185456       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-7457/inline-volume-tester-sr2sl\" objectUID=a46a2359-5d38-4fad-b87a-f79f407572f4 kind=\"CiliumEndpoint\" virtual=false\nI1010 15:49:41.186772       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-7457/inline-volume-tester-sr2sl\" objectUID=80695a1b-bed8-4a6f-b78b-fe921f73098d kind=\"Pod\" virtual=false\nI1010 15:49:41.187663       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-7457/inline-volume-tester-sr2sl-my-volume-0\" objectUID=02574bae-0800-45bd-85f4-419e3c36a2e6 kind=\"PersistentVolumeClaim\" virtual=false\nI1010 15:49:41.190596       1 garbagecollector.go:595] adding [v1/PersistentVolumeClaim, namespace: ephemeral-7457, name: inline-volume-tester-sr2sl-my-volume-0, uid: 02574bae-0800-45bd-85f4-419e3c36a2e6] to attemptToDelete, because its owner [v1/Pod, namespace: ephemeral-7457, name: inline-volume-tester-sr2sl, uid: 80695a1b-bed8-4a6f-b78b-fe921f73098d] is deletingDependents\nI1010 15:49:41.190746       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-7457/inline-volume-tester-sr2sl-my-volume-0\" objectUID=02574bae-0800-45bd-85f4-419e3c36a2e6 kind=\"PersistentVolumeClaim\" virtual=false\nI1010 15:49:42.058656       1 garbagecollector.go:471] \"Processing object\" object=\"webhook-6060/e2e-test-webhook-6kt2j\" objectUID=27286aad-70fa-41ae-af31-ee921e9ab7c4 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:42.062624       1 garbagecollector.go:580] \"Deleting object\" object=\"webhook-6060/e2e-test-webhook-6kt2j\" objectUID=27286aad-70fa-41ae-af31-ee921e9ab7c4 kind=\"EndpointSlice\" propagationPolicy=Background\nI1010 15:49:42.300272       1 garbagecollector.go:471] \"Processing object\" object=\"webhook-6060/sample-webhook-deployment-78988fc6cd\" objectUID=383d0f68-44ca-4f80-8b50-bad83460581f kind=\"ReplicaSet\" virtual=false\nI1010 15:49:42.300813       1 deployment_controller.go:583] \"Deployment has been deleted\" deployment=\"webhook-6060/sample-webhook-deployment\"\nI1010 15:49:42.322692       1 garbagecollector.go:471] \"Processing object\" object=\"replication-controller-7235/rc-test\" objectUID=062d814e-dee8-492f-86d8-0e9d50875c70 kind=\"ReplicationController\" virtual=false\nI1010 15:49:42.326770       1 garbagecollector.go:580] \"Deleting object\" object=\"webhook-6060/sample-webhook-deployment-78988fc6cd\" objectUID=383d0f68-44ca-4f80-8b50-bad83460581f kind=\"ReplicaSet\" propagationPolicy=Background\nI1010 15:49:42.327202       1 garbagecollector.go:471] \"Processing object\" object=\"replication-controller-7235/rc-test\" objectUID=062d814e-dee8-492f-86d8-0e9d50875c70 kind=\"ReplicationController\" virtual=false\nI1010 15:49:42.338965       1 garbagecollector.go:471] \"Processing object\" object=\"webhook-6060/sample-webhook-deployment-78988fc6cd-4lsbc\" objectUID=df5b591a-3919-43e1-856b-f1f8377a193d 
kind=\"Pod\" virtual=false\nI1010 15:49:42.344123       1 garbagecollector.go:580] \"Deleting object\" object=\"webhook-6060/sample-webhook-deployment-78988fc6cd-4lsbc\" objectUID=df5b591a-3919-43e1-856b-f1f8377a193d kind=\"Pod\" propagationPolicy=Background\nI1010 15:49:42.355505       1 garbagecollector.go:471] \"Processing object\" object=\"webhook-6060/sample-webhook-deployment-78988fc6cd-4lsbc\" objectUID=e655956a-e151-4816-9b3d-652d5c4b9a95 kind=\"CiliumEndpoint\" virtual=false\nI1010 15:49:42.361643       1 garbagecollector.go:580] \"Deleting object\" object=\"webhook-6060/sample-webhook-deployment-78988fc6cd-4lsbc\" objectUID=e655956a-e151-4816-9b3d-652d5c4b9a95 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI1010 15:49:42.431674       1 namespace_controller.go:185] Namespace has been deleted provisioning-1319\nE1010 15:49:43.351107       1 tokens_controller.go:262] error synchronizing serviceaccount container-probe-1179/default: secrets \"default-token-gz57c\" is forbidden: unable to create new content in namespace container-probe-1179 because it is being terminated\nI1010 15:49:43.944877       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-be6f307f-3032-4e99-9137-ea66cb0bed4a\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-09f362f56d7e23df0\") on node \"ip-172-20-33-168.sa-east-1.compute.internal\" \nI1010 15:49:43.949241       1 operation_generator.go:1577] Verified volume is safe to detach for volume \"pvc-be6f307f-3032-4e99-9137-ea66cb0bed4a\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-09f362f56d7e23df0\") on node \"ip-172-20-33-168.sa-east-1.compute.internal\" \nI1010 15:49:44.187489       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"ephemeral-883/inline-volume-tester-7xkjn\" PVC=\"ephemeral-883/inline-volume-tester-7xkjn-my-volume-0\"\nI1010 15:49:44.187520       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"ephemeral-883/inline-volume-tester-7xkjn-my-volume-0\"\nI1010 15:49:44.194061       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"ephemeral-883/inline-volume-tester-7xkjn-my-volume-0\"\nI1010 15:49:44.200329       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-883/inline-volume-tester-7xkjn\" objectUID=5efa5fca-79f3-4c65-b138-81e224af1bf9 kind=\"Pod\" virtual=false\nI1010 15:49:44.202153       1 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: ephemeral-883, name: inline-volume-tester-7xkjn, uid: 5efa5fca-79f3-4c65-b138-81e224af1bf9]\nI1010 15:49:44.202378       1 pv_controller.go:640] volume \"pvc-748e818d-25c5-4913-92c6-e12947d7f38b\" is released and reclaim policy \"Delete\" will be executed\nI1010 15:49:44.210630       1 pv_controller.go:879] volume \"pvc-748e818d-25c5-4913-92c6-e12947d7f38b\" entered phase \"Released\"\nI1010 15:49:44.215847       1 pv_controller.go:1340] isVolumeReleased[pvc-748e818d-25c5-4913-92c6-e12947d7f38b]: volume is released\nI1010 15:49:44.224693       1 pv_controller_base.go:505] deletion of claim \"ephemeral-883/inline-volume-tester-7xkjn-my-volume-0\" was already processed\nE1010 15:49:45.147678       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI1010 15:49:45.462297       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-748e818d-25c5-4913-92c6-e12947d7f38b\" 
(UniqueName: \"kubernetes.io/csi/csi-hostpath-ephemeral-883^93523a50-29e1-11ec-a7ef-f24f52f035cd\") on node \"ip-172-20-54-137.sa-east-1.compute.internal\" \nI1010 15:49:45.466078       1 operation_generator.go:1577] Verified volume is safe to detach for volume \"pvc-748e818d-25c5-4913-92c6-e12947d7f38b\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-ephemeral-883^93523a50-29e1-11ec-a7ef-f24f52f035cd\") on node \"ip-172-20-54-137.sa-east-1.compute.internal\" \nI1010 15:49:45.621644       1 event.go:291] \"Event occurred\" object=\"webhook-1725/sample-webhook-deployment\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set sample-webhook-deployment-78988fc6cd to 1\"\nI1010 15:49:45.622205       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"disruption-8003/rs\" need=10 creating=10\nI1010 15:49:45.624092       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"webhook-1725/sample-webhook-deployment-78988fc6cd\" need=1 creating=1\nI1010 15:49:45.630526       1 event.go:291] \"Event occurred\" object=\"disruption-8003/rs\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: rs-whwmw\"\nI1010 15:49:45.634862       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"webhook-1725/sample-webhook-deployment\" err=\"Operation cannot be fulfilled on deployments.apps \\\"sample-webhook-deployment\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI1010 15:49:45.636853       1 event.go:291] \"Event occurred\" object=\"disruption-8003/rs\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: rs-krvh7\"\nI1010 15:49:45.641973       1 event.go:291] \"Event occurred\" object=\"webhook-1725/sample-webhook-deployment-78988fc6cd\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: sample-webhook-deployment-78988fc6cd-zldg7\"\nI1010 15:49:45.642349       1 event.go:291] \"Event occurred\" object=\"disruption-8003/rs\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: rs-xw8vf\"\nI1010 15:49:45.659825       1 event.go:291] \"Event occurred\" object=\"disruption-8003/rs\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: rs-25dzz\"\nI1010 15:49:45.660451       1 event.go:291] \"Event occurred\" object=\"disruption-8003/rs\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: rs-g2vv2\"\nI1010 15:49:45.660915       1 event.go:291] \"Event occurred\" object=\"disruption-8003/rs\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: rs-cdhqg\"\nI1010 15:49:45.662709       1 event.go:291] \"Event occurred\" object=\"disruption-8003/rs\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: rs-dl4vc\"\nI1010 15:49:45.702461       1 event.go:291] \"Event occurred\" object=\"disruption-8003/rs\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: rs-2csdx\"\nI1010 15:49:45.702705       1 event.go:291] \"Event occurred\" object=\"disruption-8003/rs\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created 
pod: rs-49c8p\"\nI1010 15:49:45.702882       1 event.go:291] \"Event occurred\" object=\"disruption-8003/rs\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: rs-cgfpm\"\nI1010 15:49:45.714380       1 pv_controller.go:879] volume \"local-pvnclcd\" entered phase \"Available\"\nI1010 15:49:45.751390       1 pv_controller.go:930] claim \"provisioning-6016/pvc-pdhbt\" bound to volume \"local-rt8mc\"\nI1010 15:49:45.757269       1 pv_controller.go:1340] isVolumeReleased[pvc-be6f307f-3032-4e99-9137-ea66cb0bed4a]: volume is released\nI1010 15:49:45.769986       1 pv_controller.go:879] volume \"local-rt8mc\" entered phase \"Bound\"\nI1010 15:49:45.770010       1 pv_controller.go:982] volume \"local-rt8mc\" bound to claim \"provisioning-6016/pvc-pdhbt\"\nI1010 15:49:45.779732       1 pv_controller.go:823] claim \"provisioning-6016/pvc-pdhbt\" entered phase \"Bound\"\nI1010 15:49:45.855314       1 pv_controller.go:930] claim \"persistent-local-volumes-test-2429/pvc-qnhkb\" bound to volume \"local-pvnclcd\"\nI1010 15:49:45.862337       1 pv_controller.go:879] volume \"local-pvnclcd\" entered phase \"Bound\"\nI1010 15:49:45.862363       1 pv_controller.go:982] volume \"local-pvnclcd\" bound to claim \"persistent-local-volumes-test-2429/pvc-qnhkb\"\nI1010 15:49:45.868893       1 pv_controller.go:823] claim \"persistent-local-volumes-test-2429/pvc-qnhkb\" entered phase \"Bound\"\nI1010 15:49:46.036422       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume \"pvc-748e818d-25c5-4913-92c6-e12947d7f38b\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-ephemeral-883^93523a50-29e1-11ec-a7ef-f24f52f035cd\") on node \"ip-172-20-54-137.sa-east-1.compute.internal\" \nI1010 15:49:46.139112       1 garbagecollector.go:471] \"Processing object\" object=\"statefulset-1274/test-fps5g\" objectUID=204efa5e-7f6c-4c9b-9e6f-3053d6d022be kind=\"EndpointSlice\" virtual=false\nI1010 15:49:46.144722       1 garbagecollector.go:580] \"Deleting object\" object=\"statefulset-1274/test-fps5g\" objectUID=204efa5e-7f6c-4c9b-9e6f-3053d6d022be kind=\"EndpointSlice\" propagationPolicy=Background\nE1010 15:49:46.169255       1 tokens_controller.go:262] error synchronizing serviceaccount statefulset-1274/default: secrets \"default-token-l6856\" is forbidden: unable to create new content in namespace statefulset-1274 because it is being terminated\nE1010 15:49:46.454419       1 tokens_controller.go:262] error synchronizing serviceaccount exempted-namesapce/default: serviceaccounts \"default\" not found\nI1010 15:49:46.588399       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-7155/pod-56b37d72-8758-4594-983c-31c8caed92fe\" PVC=\"persistent-local-volumes-test-7155/pvc-vbb77\"\nI1010 15:49:46.588453       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-7155/pvc-vbb77\"\nI1010 15:49:46.877147       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"persistent-local-volumes-test-2429/pvc-qnhkb\"\nI1010 15:49:46.885426       1 pv_controller.go:640] volume \"local-pvnclcd\" is released and reclaim policy \"Retain\" will be executed\nI1010 15:49:46.889642       1 pv_controller.go:879] volume \"local-pvnclcd\" entered phase \"Released\"\nE1010 15:49:47.010241       1 tokens_controller.go:262] error synchronizing serviceaccount webhook-6060-markers/default: secrets \"default-token-6cwr7\" is forbidden: unable to create new content in namespace 
webhook-6060-markers because it is being terminated\nI1010 15:49:47.025155       1 pv_controller_base.go:505] deletion of claim \"persistent-local-volumes-test-2429/pvc-qnhkb\" was already processed\nI1010 15:49:47.456276       1 namespace_controller.go:185] Namespace has been deleted proxy-1156\nE1010 15:49:47.870005       1 tokens_controller.go:262] error synchronizing serviceaccount replication-controller-7235/default: secrets \"default-token-v5pb2\" is forbidden: unable to create new content in namespace replication-controller-7235 because it is being terminated\nI1010 15:49:48.403391       1 namespace_controller.go:185] Namespace has been deleted container-probe-1179\nI1010 15:49:48.577163       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-7017-2423\nE1010 15:49:48.846492       1 tokens_controller.go:262] error synchronizing serviceaccount events-1608/default: secrets \"default-token-g6z6s\" is forbidden: unable to create new content in namespace events-1608 because it is being terminated\nI1010 15:49:48.894230       1 stateful_set_control.go:555] StatefulSet statefulset-6702/ss terminating Pod ss-2 for update\nI1010 15:49:48.905189       1 event.go:291] \"Event occurred\" object=\"statefulset-6702/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"delete Pod ss-2 in StatefulSet ss successful\"\nI1010 15:49:48.965515       1 namespace_controller.go:185] Namespace has been deleted nettest-2151\nI1010 15:49:49.313949       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-7945/pod-2b749043-a179-40a5-9121-5a7f3c02cc54\" PVC=\"persistent-local-volumes-test-7945/pvc-27wmf\"\nI1010 15:49:49.314134       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-7945/pvc-27wmf\"\nW1010 15:49:49.977641       1 endpointslice_controller.go:306] Error syncing endpoint slices for service \"statefulset-6702/test\", retrying. 
Error: EndpointSlice informer cache is out of date\nI1010 15:49:49.990738       1 event.go:291] \"Event occurred\" object=\"statefulset-6702/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss-2 in StatefulSet ss successful\"\nW1010 15:49:50.003611       1 reconciler.go:335] Multi-Attach error for volume \"pvc-fbfbd524-f1fe-4cb2-8986-02e7fed4b7e0\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0a2ec7eb170a3a95b\") from node \"ip-172-20-54-137.sa-east-1.compute.internal\" Volume is already exclusively attached to node ip-172-20-42-51.sa-east-1.compute.internal and can't be attached to another\nI1010 15:49:50.004027       1 event.go:291] \"Event occurred\" object=\"statefulset-6702/ss-2\" kind=\"Pod\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedAttachVolume\" message=\"Multi-Attach error for volume \\\"pvc-fbfbd524-f1fe-4cb2-8986-02e7fed4b7e0\\\" Volume is already exclusively attached to one node and can't be attached to another\"\nI1010 15:49:50.086057       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"volume-2649/csi-hostpath4jjrh\"\nI1010 15:49:50.092169       1 pv_controller.go:640] volume \"pvc-6ad8395b-de66-4273-9fd9-79ca116cee74\" is released and reclaim policy \"Delete\" will be executed\nI1010 15:49:50.095375       1 pv_controller.go:879] volume \"pvc-6ad8395b-de66-4273-9fd9-79ca116cee74\" entered phase \"Released\"\nI1010 15:49:50.096911       1 pv_controller.go:1340] isVolumeReleased[pvc-6ad8395b-de66-4273-9fd9-79ca116cee74]: volume is released\nI1010 15:49:50.110309       1 pv_controller_base.go:505] deletion of claim \"volume-2649/csi-hostpath4jjrh\" was already processed\nI1010 15:49:50.592778       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-7155/pod-56b37d72-8758-4594-983c-31c8caed92fe\" PVC=\"persistent-local-volumes-test-7155/pvc-vbb77\"\nI1010 15:49:50.592804       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-7155/pvc-vbb77\"\nI1010 15:49:50.773633       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-7155/pod-56b37d72-8758-4594-983c-31c8caed92fe\" PVC=\"persistent-local-volumes-test-7155/pvc-vbb77\"\nI1010 15:49:50.773658       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-7155/pvc-vbb77\"\nI1010 15:49:50.778984       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"persistent-local-volumes-test-7155/pvc-vbb77\"\nI1010 15:49:50.796651       1 pv_controller.go:640] volume \"local-pv9hcsq\" is released and reclaim policy \"Retain\" will be executed\nI1010 15:49:50.800311       1 pv_controller.go:879] volume \"local-pv9hcsq\" entered phase \"Released\"\nI1010 15:49:50.804402       1 pv_controller_base.go:505] deletion of claim \"persistent-local-volumes-test-7155/pvc-vbb77\" was already processed\nW1010 15:49:50.913142       1 reconciler.go:335] Multi-Attach error for volume \"pvc-25f038ca-8dba-4753-896d-a810200b92b0\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-07ac28e0c44dc26ae\") from node \"ip-172-20-54-137.sa-east-1.compute.internal\" Volume is already exclusively attached to node ip-172-20-33-168.sa-east-1.compute.internal and can't be attached to another\nI1010 15:49:50.913647       1 event.go:291] \"Event occurred\" object=\"fsgroupchangepolicy-3401/pod-c98292a3-23cb-4c2d-856b-a9f7a1900978\" kind=\"Pod\" apiVersion=\"v1\" type=\"Warning\" 
reason=\"FailedAttachVolume\" message=\"Multi-Attach error for volume \\\"pvc-25f038ca-8dba-4753-896d-a810200b92b0\\\" Volume is already exclusively attached to one node and can't be attached to another\"\nI1010 15:49:51.221518       1 namespace_controller.go:185] Namespace has been deleted statefulset-1274\nI1010 15:49:51.223146       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-93ad28a7-20f7-4c57-8c74-2a59d3aecbfc\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0f605956eedeb0544\") on node \"ip-172-20-61-156.sa-east-1.compute.internal\" \nI1010 15:49:51.227929       1 operation_generator.go:1577] Verified volume is safe to detach for volume \"pvc-93ad28a7-20f7-4c57-8c74-2a59d3aecbfc\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0f605956eedeb0544\") on node \"ip-172-20-61-156.sa-east-1.compute.internal\" \nI1010 15:49:51.462779       1 namespace_controller.go:185] Namespace has been deleted exempted-namesapce\nI1010 15:49:51.913729       1 namespace_controller.go:185] Namespace has been deleted webhook-6060\nI1010 15:49:52.039888       1 event.go:291] \"Event occurred\" object=\"statefulset-2594/ss2\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss2-0 in StatefulSet ss2 successful\"\nI1010 15:49:52.088424       1 namespace_controller.go:185] Namespace has been deleted webhook-6060-markers\nI1010 15:49:52.379074       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-7945/pod-39ef274a-6223-4b9c-b105-11ea1d076405\" PVC=\"persistent-local-volumes-test-7945/pvc-27wmf\"\nI1010 15:49:52.380144       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-7945/pvc-27wmf\"\nI1010 15:49:52.578250       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-7945/pod-39ef274a-6223-4b9c-b105-11ea1d076405\" PVC=\"persistent-local-volumes-test-7945/pvc-27wmf\"\nI1010 15:49:52.579026       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-7945/pvc-27wmf\"\nI1010 15:49:52.585539       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-7945/pod-39ef274a-6223-4b9c-b105-11ea1d076405\" PVC=\"persistent-local-volumes-test-7945/pvc-27wmf\"\nI1010 15:49:52.585561       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-7945/pvc-27wmf\"\nE1010 15:49:53.650039       1 namespace_controller.go:162] deletion of namespace kubectl-1423 failed: unexpected items still remain in namespace: kubectl-1423 for gvr: /v1, Resource=pods\nE1010 15:49:53.760245       1 namespace_controller.go:162] deletion of namespace kubectl-1423 failed: unexpected items still remain in namespace: kubectl-1423 for gvr: /v1, Resource=pods\nE1010 15:49:53.916755       1 tokens_controller.go:262] error synchronizing serviceaccount persistent-local-volumes-test-2429/default: secrets \"default-token-kxffm\" is forbidden: unable to create new content in namespace persistent-local-volumes-test-2429 because it is being terminated\nE1010 15:49:53.920584       1 namespace_controller.go:162] deletion of namespace kubectl-1423 failed: unexpected items still remain in namespace: kubectl-1423 for gvr: /v1, Resource=pods\nI1010 15:49:54.053628       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-25f038ca-8dba-4753-896d-a810200b92b0\" 
(UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-07ac28e0c44dc26ae\") on node \"ip-172-20-33-168.sa-east-1.compute.internal\" \nI1010 15:49:54.056350       1 operation_generator.go:1577] Verified volume is safe to detach for volume \"pvc-25f038ca-8dba-4753-896d-a810200b92b0\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-07ac28e0c44dc26ae\") on node \"ip-172-20-33-168.sa-east-1.compute.internal\" \nE1010 15:49:54.131197       1 namespace_controller.go:162] deletion of namespace kubectl-1423 failed: unexpected items still remain in namespace: kubectl-1423 for gvr: /v1, Resource=pods\nI1010 15:49:54.184703       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-7945/pod-39ef274a-6223-4b9c-b105-11ea1d076405\" PVC=\"persistent-local-volumes-test-7945/pvc-27wmf\"\nI1010 15:49:54.184728       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-7945/pvc-27wmf\"\nE1010 15:49:54.271209       1 namespace_controller.go:162] deletion of namespace kubectl-1423 failed: unexpected items still remain in namespace: kubectl-1423 for gvr: /v1, Resource=pods\nI1010 15:49:54.274424       1 garbagecollector.go:471] \"Processing object\" object=\"services-3513/verify-service-up-exec-pod-lpdd9\" objectUID=5c335d86-fd3d-4158-a085-4709484d402c kind=\"CiliumEndpoint\" virtual=false\nI1010 15:49:54.297792       1 garbagecollector.go:580] \"Deleting object\" object=\"services-3513/verify-service-up-exec-pod-lpdd9\" objectUID=5c335d86-fd3d-4158-a085-4709484d402c kind=\"CiliumEndpoint\" propagationPolicy=Background\nI1010 15:49:54.381629       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-7945/pod-39ef274a-6223-4b9c-b105-11ea1d076405\" PVC=\"persistent-local-volumes-test-7945/pvc-27wmf\"\nI1010 15:49:54.382387       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-7945/pvc-27wmf\"\nI1010 15:49:54.394356       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"persistent-local-volumes-test-7945/pvc-27wmf\"\nI1010 15:49:54.402799       1 pv_controller.go:640] volume \"local-pvmj8js\" is released and reclaim policy \"Retain\" will be executed\nI1010 15:49:54.407329       1 pv_controller.go:879] volume \"local-pvmj8js\" entered phase \"Released\"\nI1010 15:49:54.412737       1 pv_controller_base.go:505] deletion of claim \"persistent-local-volumes-test-7945/pvc-27wmf\" was already processed\nE1010 15:49:54.488382       1 namespace_controller.go:162] deletion of namespace kubectl-1423 failed: unexpected items still remain in namespace: kubectl-1423 for gvr: /v1, Resource=pods\nI1010 15:49:54.513044       1 event.go:291] \"Event occurred\" object=\"statefulset-2594/ss2\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss2-1 in StatefulSet ss2 successful\"\nE1010 15:49:54.775514       1 namespace_controller.go:162] deletion of namespace kubectl-1423 failed: unexpected items still remain in namespace: kubectl-1423 for gvr: /v1, Resource=pods\nI1010 15:49:54.862700       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-fbfbd524-f1fe-4cb2-8986-02e7fed4b7e0\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0a2ec7eb170a3a95b\") on node \"ip-172-20-42-51.sa-east-1.compute.internal\" \nI1010 15:49:54.882549       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume 
\"pvc-6ad8395b-de66-4273-9fd9-79ca116cee74\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-volume-2649^9cb7f3bc-29e1-11ec-9237-7eb4be312204\") on node \"ip-172-20-42-51.sa-east-1.compute.internal\" \nI1010 15:49:54.882822       1 operation_generator.go:1577] Verified volume is safe to detach for volume \"pvc-fbfbd524-f1fe-4cb2-8986-02e7fed4b7e0\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0a2ec7eb170a3a95b\") on node \"ip-172-20-42-51.sa-east-1.compute.internal\" \nI1010 15:49:54.894011       1 operation_generator.go:1577] Verified volume is safe to detach for volume \"pvc-6ad8395b-de66-4273-9fd9-79ca116cee74\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-volume-2649^9cb7f3bc-29e1-11ec-9237-7eb4be312204\") on node \"ip-172-20-42-51.sa-east-1.compute.internal\" \nE1010 15:49:55.116818       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE1010 15:49:55.202037       1 namespace_controller.go:162] deletion of namespace kubectl-1423 failed: unexpected items still remain in namespace: kubectl-1423 for gvr: /v1, Resource=pods\nI1010 15:49:55.435822       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume \"pvc-6ad8395b-de66-4273-9fd9-79ca116cee74\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-volume-2649^9cb7f3bc-29e1-11ec-9237-7eb4be312204\") on node \"ip-172-20-42-51.sa-east-1.compute.internal\" \nE1010 15:49:56.034732       1 namespace_controller.go:162] deletion of namespace kubectl-1423 failed: unexpected items still remain in namespace: kubectl-1423 for gvr: /v1, Resource=pods\nI1010 15:49:56.518374       1 namespace_controller.go:185] Namespace has been deleted ephemeral-883\nI1010 15:49:56.699276       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"disruption-8003/rs\" need=10 creating=1\nI1010 15:49:56.710779       1 event.go:291] \"Event occurred\" object=\"disruption-8003/rs\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: rs-2cxsx\"\nI1010 15:49:56.846394       1 pv_controller.go:1340] isVolumeReleased[pvc-be6f307f-3032-4e99-9137-ea66cb0bed4a]: volume is released\nI1010 15:49:57.144344       1 pv_controller_base.go:505] deletion of claim \"provisioning-3279/pvc-j2kzh\" was already processed\nI1010 15:49:57.147647       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume \"pvc-be6f307f-3032-4e99-9137-ea66cb0bed4a\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-09f362f56d7e23df0\") on node \"ip-172-20-33-168.sa-east-1.compute.internal\" \nE1010 15:49:57.693257       1 namespace_controller.go:162] deletion of namespace kubectl-1423 failed: unexpected items still remain in namespace: kubectl-1423 for gvr: /v1, Resource=pods\nI1010 15:49:58.149899       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume \"pvc-93ad28a7-20f7-4c57-8c74-2a59d3aecbfc\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0f605956eedeb0544\") on node \"ip-172-20-61-156.sa-east-1.compute.internal\" \nI1010 15:49:58.151957       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-883-466/csi-hostpathplugin-bc5dfcb5\" objectUID=1fce0558-a35c-481a-83c0-90cc85233e76 kind=\"ControllerRevision\" virtual=false\nI1010 15:49:58.152213       1 stateful_set.go:440] StatefulSet has been deleted ephemeral-883-466/csi-hostpathplugin\nI1010 15:49:58.152270       1 garbagecollector.go:471] \"Processing 
object\" object=\"ephemeral-883-466/csi-hostpathplugin-0\" objectUID=51d1392a-69fd-4b60-b086-fa14974e5609 kind=\"Pod\" virtual=false\nI1010 15:49:58.155543       1 garbagecollector.go:580] \"Deleting object\" object=\"ephemeral-883-466/csi-hostpathplugin-0\" objectUID=51d1392a-69fd-4b60-b086-fa14974e5609 kind=\"Pod\" propagationPolicy=Background\nI1010 15:49:58.158004       1 garbagecollector.go:580] \"Deleting object\" object=\"ephemeral-883-466/csi-hostpathplugin-bc5dfcb5\" objectUID=1fce0558-a35c-481a-83c0-90cc85233e76 kind=\"ControllerRevision\" propagationPolicy=Background\nI1010 15:49:58.226430       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-93ad28a7-20f7-4c57-8c74-2a59d3aecbfc\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0f605956eedeb0544\") from node \"ip-172-20-42-51.sa-east-1.compute.internal\" \nI1010 15:49:58.443117       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-8484/pod-ceb51e52-b88e-4eea-a30d-cfe2e76b3711\" PVC=\"persistent-local-volumes-test-8484/pvc-tpfjm\"\nI1010 15:49:58.443330       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-8484/pvc-tpfjm\"\nI1010 15:49:58.613104       1 namespace_controller.go:185] Namespace has been deleted provisioning-2491-117\nI1010 15:49:58.627556       1 event.go:291] \"Event occurred\" object=\"statefulset-2594/ss2\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss2-2 in StatefulSet ss2 successful\"\nE1010 15:49:58.855095       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI1010 15:49:58.975693       1 namespace_controller.go:185] Namespace has been deleted persistent-local-volumes-test-2429\nI1010 15:49:59.869060       1 garbagecollector.go:471] \"Processing object\" object=\"webhook-1725/e2e-test-webhook-5ddf4\" objectUID=926335eb-c006-4369-81f4-db69a8de48d3 kind=\"EndpointSlice\" virtual=false\nI1010 15:49:59.883684       1 garbagecollector.go:580] \"Deleting object\" object=\"webhook-1725/e2e-test-webhook-5ddf4\" objectUID=926335eb-c006-4369-81f4-db69a8de48d3 kind=\"EndpointSlice\" propagationPolicy=Background\nI1010 15:50:00.031724       1 garbagecollector.go:471] \"Processing object\" object=\"webhook-1725/sample-webhook-deployment-78988fc6cd\" objectUID=7c277255-60d5-415c-8599-95685a17203a kind=\"ReplicaSet\" virtual=false\nI1010 15:50:00.031982       1 deployment_controller.go:583] \"Deployment has been deleted\" deployment=\"webhook-1725/sample-webhook-deployment\"\nI1010 15:50:00.039279       1 garbagecollector.go:580] \"Deleting object\" object=\"webhook-1725/sample-webhook-deployment-78988fc6cd\" objectUID=7c277255-60d5-415c-8599-95685a17203a kind=\"ReplicaSet\" propagationPolicy=Background\nI1010 15:50:00.052198       1 garbagecollector.go:471] \"Processing object\" object=\"webhook-1725/sample-webhook-deployment-78988fc6cd-zldg7\" objectUID=4c45d4a8-400c-4eb1-9257-df3b8bdf7d3a kind=\"Pod\" virtual=false\nI1010 15:50:00.055815       1 garbagecollector.go:580] \"Deleting object\" object=\"webhook-1725/sample-webhook-deployment-78988fc6cd-zldg7\" objectUID=4c45d4a8-400c-4eb1-9257-df3b8bdf7d3a kind=\"Pod\" propagationPolicy=Background\nI1010 15:50:00.075915       1 garbagecollector.go:471] \"Processing object\" 
object=\"webhook-1725/sample-webhook-deployment-78988fc6cd-zldg7\" objectUID=9bc55744-e0b1-4b88-8912-19fc4a485dd6 kind=\"CiliumEndpoint\" virtual=false\nI1010 15:50:00.091560       1 garbagecollector.go:580] \"Deleting object\" object=\"webhook-1725/sample-webhook-deployment-78988fc6cd-zldg7\" objectUID=9bc55744-e0b1-4b88-8912-19fc4a485dd6 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI1010 15:50:00.188697       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-8484/pod-ceb51e52-b88e-4eea-a30d-cfe2e76b3711\" PVC=\"persistent-local-volumes-test-8484/pvc-tpfjm\"\nI1010 15:50:00.189211       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-8484/pvc-tpfjm\"\nI1010 15:50:00.307880       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-8484/pod-ceb51e52-b88e-4eea-a30d-cfe2e76b3711\" PVC=\"persistent-local-volumes-test-8484/pvc-tpfjm\"\nI1010 15:50:00.307907       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-8484/pvc-tpfjm\"\nI1010 15:50:00.337639       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"persistent-local-volumes-test-8484/pvc-tpfjm\"\nI1010 15:50:00.353676       1 pv_controller.go:640] volume \"local-pvj4c6m\" is released and reclaim policy \"Retain\" will be executed\nI1010 15:50:00.358744       1 pv_controller.go:879] volume \"local-pvj4c6m\" entered phase \"Released\"\nI1010 15:50:00.365622       1 pv_controller_base.go:505] deletion of claim \"persistent-local-volumes-test-8484/pvc-tpfjm\" was already processed\nE1010 15:50:00.461497       1 namespace_controller.go:162] deletion of namespace kubectl-1423 failed: unexpected items still remain in namespace: kubectl-1423 for gvr: /v1, Resource=pods\nI1010 15:50:00.752491       1 namespace_controller.go:185] Namespace has been deleted volume-2649\nI1010 15:50:01.204269       1 namespace_controller.go:185] Namespace has been deleted persistent-local-volumes-test-7155\nI1010 15:50:01.209679       1 namespace_controller.go:185] Namespace has been deleted persistent-local-volumes-test-7945\nI1010 15:50:01.388040       1 job_controller.go:406] enqueueing job job-2076/fail-once-non-local\nI1010 15:50:01.393180       1 job_controller.go:406] enqueueing job job-2076/fail-once-non-local\nI1010 15:50:01.396089       1 event.go:291] \"Event occurred\" object=\"job-2076/fail-once-non-local\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: fail-once-non-local--1-th5b8\"\nI1010 15:50:01.410193       1 job_controller.go:406] enqueueing job job-2076/fail-once-non-local\nI1010 15:50:01.411338       1 event.go:291] \"Event occurred\" object=\"job-2076/fail-once-non-local\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: fail-once-non-local--1-b5prf\"\nI1010 15:50:01.422776       1 job_controller.go:406] enqueueing job job-2076/fail-once-non-local\nI1010 15:50:01.424045       1 job_controller.go:406] enqueueing job job-2076/fail-once-non-local\nI1010 15:50:01.429630       1 job_controller.go:406] enqueueing job job-2076/fail-once-non-local\nI1010 15:50:02.029891       1 garbagecollector.go:471] \"Processing object\" object=\"disruption-8003/rs-cdhqg\" objectUID=e44714ab-5005-48b5-a5c1-a4bfc75ae161 kind=\"Pod\" virtual=false\nI1010 15:50:02.030197       1 garbagecollector.go:471] \"Processing object\" 
object=\"disruption-8003/rs-2cxsx\" objectUID=8a852427-e54b-4c93-b278-c7521c946db7 kind=\"Pod\" virtual=false\nI1010 15:50:02.030361       1 garbagecollector.go:471] \"Processing object\" object=\"disruption-8003/rs-whwmw\" objectUID=6a23a7ce-0dac-4070-a456-085ef34f22a8 kind=\"Pod\" virtual=false\nI1010 15:50:02.030501       1 garbagecollector.go:471] \"Processing object\" object=\"disruption-8003/rs-krvh7\" objectUID=ca91a822-fbf0-4388-8ab2-f2153dd29d21 kind=\"Pod\" virtual=false\nI1010 15:50:02.030638       1 garbagecollector.go:471] \"Processing object\" object=\"disruption-8003/rs-g2vv2\" objectUID=250d6227-33e5-4232-8aee-d0118653e737 kind=\"Pod\" virtual=false\nI1010 15:50:02.030768       1 garbagecollector.go:471] \"Processing object\" object=\"disruption-8003/rs-2csdx\" objectUID=a7f697b6-35f0-490b-ad61-b1226499af43 kind=\"Pod\" virtual=false\nI1010 15:50:02.030906       1 garbagecollector.go:471] \"Processing object\" object=\"disruption-8003/rs-49c8p\" objectUID=92026478-d157-4b7d-ad89-9be7de7568e6 kind=\"Pod\" virtual=false\nI1010 15:50:02.031039       1 garbagecollector.go:471] \"Processing object\" object=\"disruption-8003/rs-xw8vf\" objectUID=981ee17e-c865-498b-a370-12d50e495c6f kind=\"Pod\" virtual=false\nI1010 15:50:02.031177       1 garbagecollector.go:471] \"Processing object\" object=\"disruption-8003/rs-dl4vc\" objectUID=ed331544-3ab8-4201-bdeb-3ab23477c832 kind=\"Pod\" virtual=false\nI1010 15:50:02.031309       1 garbagecollector.go:471] \"Processing object\" object=\"disruption-8003/rs-25dzz\" objectUID=0869149c-6d0a-4561-b98f-7db240b0701f kind=\"Pod\" virtual=false\nI1010 15:50:02.036469       1 garbagecollector.go:580] \"Deleting object\" object=\"disruption-8003/rs-49c8p\" objectUID=92026478-d157-4b7d-ad89-9be7de7568e6 kind=\"Pod\" propagationPolicy=Background\nI1010 15:50:02.036891       1 garbagecollector.go:580] \"Deleting object\" object=\"disruption-8003/rs-whwmw\" objectUID=6a23a7ce-0dac-4070-a456-085ef34f22a8 kind=\"Pod\" propagationPolicy=Background\nI1010 15:50:02.037213       1 garbagecollector.go:580] \"Deleting object\" object=\"disruption-8003/rs-dl4vc\" objectUID=ed331544-3ab8-4201-bdeb-3ab23477c832 kind=\"Pod\" propagationPolicy=Background\nI1010 15:50:02.037548       1 garbagecollector.go:580] \"Deleting object\" object=\"disruption-8003/rs-25dzz\" objectUID=0869149c-6d0a-4561-b98f-7db240b0701f kind=\"Pod\" propagationPolicy=Background\nI1010 15:50:02.037866       1 garbagecollector.go:580] \"Deleting object\" object=\"disruption-8003/rs-2csdx\" objectUID=a7f697b6-35f0-490b-ad61-b1226499af43 kind=\"Pod\" propagationPolicy=Background\nI1010 15:50:02.038181       1 garbagecollector.go:580] \"Deleting object\" object=\"disruption-8003/rs-2cxsx\" objectUID=8a852427-e54b-4c93-b278-c7521c946db7 kind=\"Pod\" propagationPolicy=Background\nI1010 15:50:02.040315       1 garbagecollector.go:580] \"Deleting object\" object=\"disruption-8003/rs-g2vv2\" objectUID=250d6227-33e5-4232-8aee-d0118653e737 kind=\"Pod\" propagationPolicy=Background\nI1010 15:50:02.040621       1 garbagecollector.go:580] \"Deleting object\" object=\"disruption-8003/rs-xw8vf\" objectUID=981ee17e-c865-498b-a370-12d50e495c6f kind=\"Pod\" propagationPolicy=Background\nI1010 15:50:02.040672       1 garbagecollector.go:580] \"Deleting object\" object=\"disruption-8003/rs-krvh7\" objectUID=ca91a822-fbf0-4388-8ab2-f2153dd29d21 kind=\"Pod\" propagationPolicy=Background\nI1010 15:50:02.040733       1 garbagecollector.go:580] \"Deleting object\" object=\"disruption-8003/rs-cdhqg\" 
objectUID=e44714ab-5005-48b5-a5c1-a4bfc75ae161 kind=\"Pod\" propagationPolicy=Background\nE1010 15:50:02.076139       1 disruption.go:581] Failed to sync pdb disruption-8003/foo: replicasets.apps does not implement the scale subresource\nE1010 15:50:02.089764       1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"foo.16acb61d5e01ae99\", GenerateName:\"\", Namespace:\"disruption-8003\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"PodDisruptionBudget\", Namespace:\"disruption-8003\", Name:\"foo\", UID:\"3bab713b-9454-4e00-af7c-37a2826a8608\", APIVersion:\"policy/v1\", ResourceVersion:\"14983\", FieldPath:\"\"}, Reason:\"CalculateExpectedPodCountFailed\", Message:\"Failed to calculate the number of expected pods: replicasets.apps does not implement the scale subresource\", Source:v1.EventSource{Component:\"controllermanager\", Host:\"\"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc050e0ca84898a99, ext:615038428375, loc:(*time.Location)(0x750cdc0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc050e0ca84898a99, ext:615038428375, loc:(*time.Location)(0x750cdc0)}}, Count:1, Type:\"Warning\", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"foo.16acb61d5e01ae99\" is forbidden: unable to create new content in namespace disruption-8003 because it is being terminated' (will not retry!)\nE1010 15:50:02.097900       1 disruption.go:581] Failed to sync pdb disruption-8003/foo: replicasets.apps does not implement the scale subresource\nE1010 15:50:02.100689       1 disruption.go:534] Error syncing PodDisruptionBudget disruption-8003/foo, requeuing: Operation cannot be fulfilled on poddisruptionbudgets.policy \"foo\": the object has been modified; please apply your changes to the latest version and try again\nE1010 15:50:02.103647       1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"foo.16acb61d5e01ae99\", GenerateName:\"\", Namespace:\"disruption-8003\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"PodDisruptionBudget\", Namespace:\"disruption-8003\", Name:\"foo\", UID:\"3bab713b-9454-4e00-af7c-37a2826a8608\", APIVersion:\"policy/v1\", ResourceVersion:\"14983\", FieldPath:\"\"}, Reason:\"CalculateExpectedPodCountFailed\", Message:\"Failed to calculate the number of expected pods: replicasets.apps does not implement the scale subresource\", Source:v1.EventSource{Component:\"controllermanager\", Host:\"\"}, 
FirstTimestamp:v1.Time{Time:time.Time{wall:0xc050e0ca84898a99, ext:615038428375, loc:(*time.Location)(0x750cdc0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc050e0ca85d5a272, ext:615060192434, loc:(*time.Location)(0x750cdc0)}}, Count:2, Type:\"Warning\", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"foo.16acb61d5e01ae99\" is forbidden: unable to create new content in namespace disruption-8003 because it is being terminated' (will not retry!)\nE1010 15:50:02.104892       1 disruption.go:581] Failed to sync pdb disruption-8003/foo: replicasets.apps does not implement the scale subresource\nE1010 15:50:02.109682       1 disruption.go:534] Error syncing PodDisruptionBudget disruption-8003/foo, requeuing: Operation cannot be fulfilled on poddisruptionbudgets.policy \"foo\": StorageError: invalid object, Code: 4, Key: /registry/poddisruptionbudgets/disruption-8003/foo, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 3bab713b-9454-4e00-af7c-37a2826a8608, UID in object meta: \nE1010 15:50:02.110618       1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"foo.16acb61d5e01ae99\", GenerateName:\"\", Namespace:\"disruption-8003\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"PodDisruptionBudget\", Namespace:\"disruption-8003\", Name:\"foo\", UID:\"3bab713b-9454-4e00-af7c-37a2826a8608\", APIVersion:\"policy/v1\", ResourceVersion:\"15266\", FieldPath:\"\"}, Reason:\"CalculateExpectedPodCountFailed\", Message:\"Failed to calculate the number of expected pods: replicasets.apps does not implement the scale subresource\", Source:v1.EventSource{Component:\"controllermanager\", Host:\"\"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc050e0ca84898a99, ext:615038428375, loc:(*time.Location)(0x750cdc0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc050e0ca86405f86, ext:615067187647, loc:(*time.Location)(0x750cdc0)}}, Count:3, Type:\"Warning\", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"foo.16acb61d5e01ae99\" is forbidden: unable to create new content in namespace disruption-8003 because it is being terminated' (will not retry!)\nI1010 15:50:02.387092       1 garbagecollector.go:471] \"Processing object\" object=\"volume-2649-4755/csi-hostpathplugin-669f6f7478\" objectUID=f5b9692c-b7fa-4017-ae9c-8ce3a71fcca1 kind=\"ControllerRevision\" virtual=false\nI1010 15:50:02.387495       1 stateful_set.go:440] StatefulSet has been deleted volume-2649-4755/csi-hostpathplugin\nI1010 15:50:02.387636       1 garbagecollector.go:471] \"Processing object\" object=\"volume-2649-4755/csi-hostpathplugin-0\" objectUID=80c63223-820e-4414-b341-5a458cfae9cb kind=\"Pod\" virtual=false\nI1010 15:50:02.389413       1 garbagecollector.go:580] \"Deleting 
object\" object=\"volume-2649-4755/csi-hostpathplugin-669f6f7478\" objectUID=f5b9692c-b7fa-4017-ae9c-8ce3a71fcca1 kind=\"ControllerRevision\" propagationPolicy=Background\nI1010 15:50:02.389938       1 garbagecollector.go:580] \"Deleting object\" object=\"volume-2649-4755/csi-hostpathplugin-0\" objectUID=80c63223-820e-4414-b341-5a458cfae9cb kind=\"Pod\" propagationPolicy=Background\nI1010 15:50:02.656539       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume \"pvc-fbfbd524-f1fe-4cb2-8986-02e7fed4b7e0\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0a2ec7eb170a3a95b\") on node \"ip-172-20-42-51.sa-east-1.compute.internal\" \nI1010 15:50:02.668691       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-fbfbd524-f1fe-4cb2-8986-02e7fed4b7e0\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0a2ec7eb170a3a95b\") from node \"ip-172-20-54-137.sa-east-1.compute.internal\" \nI1010 15:50:02.718403       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume \"pvc-93ad28a7-20f7-4c57-8c74-2a59d3aecbfc\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0f605956eedeb0544\") from node \"ip-172-20-42-51.sa-east-1.compute.internal\" \nI1010 15:50:02.718609       1 event.go:291] \"Event occurred\" object=\"fsgroupchangepolicy-8660/pod-bf554662-8b10-4263-a490-efefcd75c20f\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-93ad28a7-20f7-4c57-8c74-2a59d3aecbfc\\\" \"\nE1010 15:50:02.727425       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nW1010 15:50:02.733399       1 utils.go:265] Service services-3513/service-headless-toggled using reserved endpoint slices label, skipping label service.kubernetes.io/headless: \nI1010 15:50:03.158356       1 event.go:291] \"Event occurred\" object=\"volume-expand-1028-5375/csi-hostpathplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful\"\nI1010 15:50:03.211660       1 namespace_controller.go:185] Namespace has been deleted replication-controller-7235\nI1010 15:50:03.604112       1 event.go:291] \"Event occurred\" object=\"volume-expand-1028/csi-hostpath6frrr\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-volume-expand-1028\\\" or manually created by system administrator\"\nI1010 15:50:03.652075       1 stateful_set_control.go:555] StatefulSet statefulset-2594/ss2 terminating Pod ss2-2 for update\nI1010 15:50:03.657960       1 event.go:291] \"Event occurred\" object=\"statefulset-2594/ss2\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"delete Pod ss2-2 in StatefulSet ss2 successful\"\nI1010 15:50:03.728647       1 job_controller.go:406] enqueueing job job-4122/suspend-true-to-false\nI1010 15:50:03.735557       1 event.go:291] \"Event occurred\" object=\"job-4122/suspend-true-to-false\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: suspend-true-to-false--1-57kkg\"\nI1010 15:50:03.735565       1 job_controller.go:406] enqueueing job 
job-4122/suspend-true-to-false\nI1010 15:50:03.741405       1 job_controller.go:406] enqueueing job job-4122/suspend-true-to-false\nI1010 15:50:03.743639       1 job_controller.go:406] enqueueing job job-4122/suspend-true-to-false\nI1010 15:50:03.741759       1 event.go:291] \"Event occurred\" object=\"job-4122/suspend-true-to-false\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: suspend-true-to-false--1-nqjxv\"\nI1010 15:50:03.743840       1 event.go:291] \"Event occurred\" object=\"job-4122/suspend-true-to-false\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"Resumed\" message=\"Job resumed\"\nI1010 15:50:03.746865       1 job_controller.go:406] enqueueing job job-4122/suspend-true-to-false\nI1010 15:50:03.753771       1 event.go:291] \"Event occurred\" object=\"job-4122/suspend-true-to-false\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"Resumed\" message=\"Job resumed\"\nI1010 15:50:03.754017       1 job_controller.go:406] enqueueing job job-4122/suspend-true-to-false\nI1010 15:50:04.821163       1 job_controller.go:406] enqueueing job job-2076/fail-once-non-local\nI1010 15:50:04.826233       1 job_controller.go:406] enqueueing job job-2076/fail-once-non-local\nI1010 15:50:04.827061       1 event.go:291] \"Event occurred\" object=\"job-2076/fail-once-non-local\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: fail-once-non-local--1-54f86\"\nI1010 15:50:04.833393       1 job_controller.go:406] enqueueing job job-2076/fail-once-non-local\nE1010 15:50:04.833481       1 job_controller.go:441] Error syncing job: failed pod(s) detected for job key \"job-2076/fail-once-non-local\"\nE1010 15:50:04.942909       1 tokens_controller.go:262] error synchronizing serviceaccount metrics-grabber-3086/default: secrets \"default-token-fn6c2\" is forbidden: unable to create new content in namespace metrics-grabber-3086 because it is being terminated\nI1010 15:50:05.060641       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume \"pvc-fbfbd524-f1fe-4cb2-8986-02e7fed4b7e0\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0a2ec7eb170a3a95b\") from node \"ip-172-20-54-137.sa-east-1.compute.internal\" \nI1010 15:50:05.060731       1 event.go:291] \"Event occurred\" object=\"statefulset-6702/ss-2\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-fbfbd524-f1fe-4cb2-8986-02e7fed4b7e0\\\" \"\nI1010 15:50:05.087699       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"provisioning-6016/pvc-pdhbt\"\nI1010 15:50:05.094602       1 pv_controller.go:640] volume \"local-rt8mc\" is released and reclaim policy \"Retain\" will be executed\nI1010 15:50:05.098986       1 pv_controller.go:879] volume \"local-rt8mc\" entered phase \"Released\"\nI1010 15:50:05.134401       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"volume-9481/pvc-d7m2k\"\nI1010 15:50:05.138554       1 pv_controller.go:640] volume \"local-hx88h\" is released and reclaim policy \"Retain\" will be executed\nI1010 15:50:05.141413       1 pv_controller.go:879] volume \"local-hx88h\" entered phase \"Released\"\nI1010 15:50:05.257602       1 pv_controller_base.go:505] deletion of claim \"provisioning-6016/pvc-pdhbt\" was already processed\nI1010 15:50:05.311805       1 pv_controller_base.go:505] deletion of claim \"volume-9481/pvc-d7m2k\" was already processed\nI1010 
15:50:05.417939       1 job_controller.go:406] enqueueing job job-2076/fail-once-non-local\nI1010 15:50:05.423110       1 job_controller.go:406] enqueueing job job-2076/fail-once-non-local\nI1010 15:50:05.423609       1 event.go:291] \"Event occurred\" object=\"job-2076/fail-once-non-local\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: fail-once-non-local--1-8fbbd\"\nI1010 15:50:05.428815       1 job_controller.go:406] enqueueing job job-2076/fail-once-non-local\nE1010 15:50:05.855414       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-3279/default: secrets \"default-token-2vvd9\" is forbidden: unable to create new content in namespace provisioning-3279 because it is being terminated\nE1010 15:50:05.905438       1 namespace_controller.go:162] deletion of namespace kubectl-1423 failed: unexpected items still remain in namespace: kubectl-1423 for gvr: /v1, Resource=pods\nE1010 15:50:05.928026       1 namespace_controller.go:162] deletion of namespace cronjob-2435 failed: unexpected items still remain in namespace: cronjob-2435 for gvr: /v1, Resource=pods\nI1010 15:50:06.167771       1 pv_controller.go:879] volume \"local-pvzg2j7\" entered phase \"Available\"\nI1010 15:50:06.309118       1 pv_controller.go:930] claim \"persistent-local-volumes-test-4324/pvc-z484l\" bound to volume \"local-pvzg2j7\"\nI1010 15:50:06.324464       1 pv_controller.go:879] volume \"local-pvzg2j7\" entered phase \"Bound\"\nI1010 15:50:06.324545       1 pv_controller.go:982] volume \"local-pvzg2j7\" bound to claim \"persistent-local-volumes-test-4324/pvc-z484l\"\nI1010 15:50:06.331611       1 pv_controller.go:823] claim \"persistent-local-volumes-test-4324/pvc-z484l\" entered phase \"Bound\"\nI1010 15:50:06.817866       1 job_controller.go:406] enqueueing job job-2076/fail-once-non-local\nI1010 15:50:07.177592       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume \"pvc-25f038ca-8dba-4753-896d-a810200b92b0\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-07ac28e0c44dc26ae\") on node \"ip-172-20-33-168.sa-east-1.compute.internal\" \nI1010 15:50:07.241233       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-25f038ca-8dba-4753-896d-a810200b92b0\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-07ac28e0c44dc26ae\") from node \"ip-172-20-54-137.sa-east-1.compute.internal\" \nE1010 15:50:07.711861       1 tokens_controller.go:262] error synchronizing serviceaccount volume-2649-4755/default: secrets \"default-token-zjpbq\" is forbidden: unable to create new content in namespace volume-2649-4755 because it is being terminated\nI1010 15:50:07.821439       1 job_controller.go:406] enqueueing job job-2076/fail-once-non-local\nI1010 15:50:08.418376       1 job_controller.go:406] enqueueing job job-2076/fail-once-non-local\nI1010 15:50:08.423130       1 event.go:291] \"Event occurred\" object=\"job-2076/fail-once-non-local\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: fail-once-non-local--1-6mswl\"\nI1010 15:50:08.423370       1 job_controller.go:406] enqueueing job job-2076/fail-once-non-local\nI1010 15:50:08.431766       1 job_controller.go:406] enqueueing job job-2076/fail-once-non-local\nI1010 15:50:08.719638       1 namespace_controller.go:185] Namespace has been deleted ephemeral-883-466\nI1010 15:50:08.912747       1 event.go:291] \"Event occurred\" object=\"resourcequota-3377/test-claim\" 
kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForFirstConsumer\" message=\"waiting for first consumer to be created before binding\"\nI1010 15:50:09.222017       1 job_controller.go:406] enqueueing job job-2076/fail-once-non-local\nI1010 15:50:09.227340       1 job_controller.go:406] enqueueing job job-2076/fail-once-non-local\nI1010 15:50:09.261889       1 pv_controller.go:879] volume \"pvc-b7ee2f91-45c3-428e-b167-76d642283487\" entered phase \"Bound\"\nI1010 15:50:09.262129       1 pv_controller.go:982] volume \"pvc-b7ee2f91-45c3-428e-b167-76d642283487\" bound to claim \"volume-expand-1028/csi-hostpath6frrr\"\nI1010 15:50:09.268084       1 pv_controller.go:823] claim \"volume-expand-1028/csi-hostpath6frrr\" entered phase \"Bound\"\nI1010 15:50:09.454462       1 namespace_controller.go:185] Namespace has been deleted limitrange-2497\nI1010 15:50:09.624729       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume \"pvc-25f038ca-8dba-4753-896d-a810200b92b0\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-07ac28e0c44dc26ae\") from node \"ip-172-20-54-137.sa-east-1.compute.internal\" \nI1010 15:50:09.624886       1 event.go:291] \"Event occurred\" object=\"fsgroupchangepolicy-3401/pod-c98292a3-23cb-4c2d-856b-a9f7a1900978\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-25f038ca-8dba-4753-896d-a810200b92b0\\\" \"\nI1010 15:50:09.779015       1 job_controller.go:406] enqueueing job job-4122/suspend-true-to-false\nI1010 15:50:09.786211       1 namespace_controller.go:185] Namespace has been deleted webhook-1725\nI1010 15:50:09.905967       1 namespace_controller.go:185] Namespace has been deleted webhook-1725-markers\nI1010 15:50:10.018438       1 job_controller.go:406] enqueueing job job-2076/fail-once-non-local\nI1010 15:50:10.018949       1 namespace_controller.go:185] Namespace has been deleted metrics-grabber-3086\nI1010 15:50:10.182279       1 job_controller.go:406] enqueueing job job-4122/suspend-true-to-false\nI1010 15:50:10.670257       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-b7ee2f91-45c3-428e-b167-76d642283487\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-volume-expand-1028^bf1dc328-29e1-11ec-9d2b-3e943e804a0c\") from node \"ip-172-20-42-51.sa-east-1.compute.internal\" \nI1010 15:50:10.989113       1 namespace_controller.go:185] Namespace has been deleted provisioning-3279\nI1010 15:50:11.018900       1 job_controller.go:406] enqueueing job job-2076/fail-once-non-local\nI1010 15:50:11.019542       1 event.go:291] \"Event occurred\" object=\"job-2076/fail-once-non-local\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"Completed\" message=\"Job completed\"\nI1010 15:50:11.025051       1 job_controller.go:406] enqueueing job job-2076/fail-once-non-local\nI1010 15:50:11.201614       1 event.go:291] \"Event occurred\" object=\"resourcequota-3377/test-claim\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForFirstConsumer\" message=\"waiting for first consumer to be created before binding\"\nI1010 15:50:11.203436       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"resourcequota-3377/test-claim\"\nI1010 15:50:11.223262       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume \"pvc-b7ee2f91-45c3-428e-b167-76d642283487\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-volume-expand-1028^bf1dc328-29e1-11ec-9d2b-3e943e804a0c\") from 
node \"ip-172-20-42-51.sa-east-1.compute.internal\" \nI1010 15:50:11.223433       1 event.go:291] \"Event occurred\" object=\"volume-expand-1028/pod-c497a261-67df-48ac-b29c-db00fb1ed54c\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-b7ee2f91-45c3-428e-b167-76d642283487\\\" \"\nI1010 15:50:11.672374       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"ephemeral-7457/inline-volume-tester-sr2sl\" PVC=\"ephemeral-7457/inline-volume-tester-sr2sl-my-volume-0\"\nI1010 15:50:11.672405       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"ephemeral-7457/inline-volume-tester-sr2sl-my-volume-0\"\nI1010 15:50:11.682341       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"ephemeral-7457/inline-volume-tester-sr2sl-my-volume-0\"\nI1010 15:50:11.689554       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-7457/inline-volume-tester-sr2sl\" objectUID=80695a1b-bed8-4a6f-b78b-fe921f73098d kind=\"Pod\" virtual=false\nI1010 15:50:11.691543       1 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: ephemeral-7457, name: inline-volume-tester-sr2sl, uid: 80695a1b-bed8-4a6f-b78b-fe921f73098d]\nI1010 15:50:11.692233       1 pv_controller.go:640] volume \"pvc-02574bae-0800-45bd-85f4-419e3c36a2e6\" is released and reclaim policy \"Delete\" will be executed\nI1010 15:50:11.699619       1 pv_controller.go:879] volume \"pvc-02574bae-0800-45bd-85f4-419e3c36a2e6\" entered phase \"Released\"\nI1010 15:50:11.704895       1 pv_controller.go:1340] isVolumeReleased[pvc-02574bae-0800-45bd-85f4-419e3c36a2e6]: volume is released\nI1010 15:50:11.713903       1 pv_controller_base.go:505] deletion of claim \"ephemeral-7457/inline-volume-tester-sr2sl-my-volume-0\" was already processed\nI1010 15:50:11.984543       1 job_controller.go:406] enqueueing job job-4122/suspend-true-to-false\nI1010 15:50:11.988687       1 job_controller.go:406] enqueueing job job-4122/suspend-true-to-false\nI1010 15:50:11.989509       1 event.go:291] \"Event occurred\" object=\"job-4122/suspend-true-to-false\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: suspend-true-to-false--1-ggnr9\"\nI1010 15:50:11.995826       1 job_controller.go:406] enqueueing job job-4122/suspend-true-to-false\nI1010 15:50:11.998131       1 job_controller.go:406] enqueueing job job-4122/suspend-true-to-false\nI1010 15:50:12.027213       1 job_controller.go:406] enqueueing job job-4122/suspend-true-to-false\nE1010 15:50:12.129883       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-6016/default: secrets \"default-token-zw8c4\" is forbidden: unable to create new content in namespace provisioning-6016 because it is being terminated\nI1010 15:50:12.532854       1 namespace_controller.go:185] Namespace has been deleted persistent-local-volumes-test-8484\nI1010 15:50:13.189790       1 event.go:291] \"Event occurred\" object=\"statefulset-2594/ss2\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss2-2 in StatefulSet ss2 successful\"\nI1010 15:50:13.220150       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-4394-8264/csi-mockplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-mockplugin-0 in StatefulSet csi-mockplugin successful\"\nI1010 15:50:13.362087  
     1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-4394-8264/csi-mockplugin-attacher\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-mockplugin-attacher-0 in StatefulSet csi-mockplugin-attacher successful\"\nI1010 15:50:13.758394       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"services-5131/affinity-nodeport-transition\" need=3 creating=3\nI1010 15:50:13.764526       1 event.go:291] \"Event occurred\" object=\"services-5131/affinity-nodeport-transition\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: affinity-nodeport-transition-gd66d\"\nI1010 15:50:13.773653       1 event.go:291] \"Event occurred\" object=\"services-5131/affinity-nodeport-transition\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: affinity-nodeport-transition-srw86\"\nI1010 15:50:13.773793       1 event.go:291] \"Event occurred\" object=\"services-5131/affinity-nodeport-transition\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: affinity-nodeport-transition-gszw4\"\nI1010 15:50:14.397027       1 garbagecollector.go:471] \"Processing object\" object=\"statefulset-2594/ss2-0\" objectUID=ed53c5f3-f38b-4ce7-ae0d-61d8cb540971 kind=\"CiliumEndpoint\" virtual=false\nW1010 15:50:14.399684       1 endpointslice_controller.go:306] Error syncing endpoint slices for service \"statefulset-2594/test\", retrying. Error: EndpointSlice informer cache is out of date\nI1010 15:50:14.402960       1 garbagecollector.go:580] \"Deleting object\" object=\"statefulset-2594/ss2-0\" objectUID=ed53c5f3-f38b-4ce7-ae0d-61d8cb540971 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI1010 15:50:14.408161       1 event.go:291] \"Event occurred\" object=\"statefulset-2594/ss2\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss2-0 in StatefulSet ss2 successful\"\nI1010 15:50:14.620963       1 job_controller.go:406] enqueueing job job-4122/suspend-true-to-false\nI1010 15:50:14.624227       1 job_controller.go:406] enqueueing job job-4122/suspend-true-to-false\nI1010 15:50:14.626487       1 event.go:291] \"Event occurred\" object=\"job-4122/suspend-true-to-false\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: suspend-true-to-false--1-pjc64\"\nI1010 15:50:14.631520       1 job_controller.go:406] enqueueing job job-4122/suspend-true-to-false\nI1010 15:50:14.643731       1 job_controller.go:406] enqueueing job job-4122/suspend-true-to-false\nE1010 15:50:15.498251       1 pv_controller.go:1451] error finding provisioning plugin for claim provisioning-7347/pvc-s8bc5: storageclass.storage.k8s.io \"provisioning-7347\" not found\nI1010 15:50:15.498865       1 event.go:291] \"Event occurred\" object=\"provisioning-7347/pvc-s8bc5\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"provisioning-7347\\\" not found\"\nI1010 15:50:15.618907       1 job_controller.go:406] enqueueing job job-4122/suspend-true-to-false\nI1010 15:50:15.647990       1 pv_controller.go:879] volume \"local-qndhz\" entered phase \"Available\"\nI1010 15:50:15.749449       1 pv_controller.go:930] claim \"provisioning-7347/pvc-s8bc5\" bound to volume \"local-qndhz\"\nI1010 15:50:15.756390       
1 pv_controller.go:879] volume \"local-qndhz\" entered phase \"Bound\"\nI1010 15:50:15.756419       1 pv_controller.go:982] volume \"local-qndhz\" bound to claim \"provisioning-7347/pvc-s8bc5\"\nI1010 15:50:15.762996       1 pv_controller.go:823] claim \"provisioning-7347/pvc-s8bc5\" entered phase \"Bound\"\nI1010 15:50:16.052594       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"webhook-9906/sample-webhook-deployment-78988fc6cd\" need=1 creating=1\nI1010 15:50:16.056166       1 event.go:291] \"Event occurred\" object=\"webhook-9906/sample-webhook-deployment\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set sample-webhook-deployment-78988fc6cd to 1\"\nI1010 15:50:16.065067       1 event.go:291] \"Event occurred\" object=\"webhook-9906/sample-webhook-deployment-78988fc6cd\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: sample-webhook-deployment-78988fc6cd-qj8c7\"\nI1010 15:50:16.078451       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"webhook-9906/sample-webhook-deployment\" err=\"Operation cannot be fulfilled on deployments.apps \\\"sample-webhook-deployment\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI1010 15:50:16.087038       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"webhook-9906/sample-webhook-deployment\" err=\"Operation cannot be fulfilled on deployments.apps \\\"sample-webhook-deployment\\\": the object has been modified; please apply your changes to the latest version and try again\"\nE1010 15:50:16.253272       1 namespace_controller.go:162] deletion of namespace kubectl-1423 failed: unexpected items still remain in namespace: kubectl-1423 for gvr: /v1, Resource=pods\nI1010 15:50:16.779880       1 job_controller.go:406] enqueueing job job-4122/suspend-true-to-false\nI1010 15:50:16.784851       1 job_controller.go:406] enqueueing job job-4122/suspend-true-to-false\nI1010 15:50:16.900506       1 garbagecollector.go:471] \"Processing object\" object=\"job-2076/fail-once-non-local--1-8fbbd\" objectUID=1fc1b095-fc05-4085-80fd-fda2a4979223 kind=\"Pod\" virtual=false\nI1010 15:50:16.900816       1 job_controller.go:406] enqueueing job job-2076/fail-once-non-local\nI1010 15:50:16.900954       1 garbagecollector.go:471] \"Processing object\" object=\"job-2076/fail-once-non-local--1-6mswl\" objectUID=cf2b1bf7-7b35-4b3b-b766-5883eb9446d4 kind=\"Pod\" virtual=false\nI1010 15:50:16.901167       1 garbagecollector.go:471] \"Processing object\" object=\"job-2076/fail-once-non-local--1-th5b8\" objectUID=819eeb96-d005-4f98-b9f9-0a4e7e01a715 kind=\"Pod\" virtual=false\nI1010 15:50:16.901397       1 garbagecollector.go:471] \"Processing object\" object=\"job-2076/fail-once-non-local--1-b5prf\" objectUID=e83c0c17-0d4b-41f9-99fd-7e4ce1b225be kind=\"Pod\" virtual=false\nI1010 15:50:16.901736       1 garbagecollector.go:471] \"Processing object\" object=\"job-2076/fail-once-non-local--1-54f86\" objectUID=375e4429-bfd4-4fa4-9bbe-57fd49757601 kind=\"Pod\" virtual=false\nI1010 15:50:16.915997       1 garbagecollector.go:580] \"Deleting object\" object=\"job-2076/fail-once-non-local--1-b5prf\" objectUID=e83c0c17-0d4b-41f9-99fd-7e4ce1b225be kind=\"Pod\" propagationPolicy=Background\nI1010 15:50:16.916285       1 garbagecollector.go:580] \"Deleting object\" object=\"job-2076/fail-once-non-local--1-6mswl\" 
objectUID=cf2b1bf7-7b35-4b3b-b766-5883eb9446d4 kind=\"Pod\" propagationPolicy=Background\nI1010 15:50:16.916561       1 garbagecollector.go:580] \"Deleting object\" object=\"job-2076/fail-once-non-local--1-th5b8\" objectUID=819eeb96-d005-4f98-b9f9-0a4e7e01a715 kind=\"Pod\" propagationPolicy=Background\nI1010 15:50:16.916831       1 garbagecollector.go:580] \"Deleting object\" object=\"job-2076/fail-once-non-local--1-54f86\" objectUID=375e4429-bfd4-4fa4-9bbe-57fd49757601 kind=\"Pod\" propagationPolicy=Background\nI1010 15:50:16.917099       1 garbagecollector.go:580] \"Deleting object\" object=\"job-2076/fail-once-non-local--1-8fbbd\" objectUID=1fc1b095-fc05-4085-80fd-fda2a4979223 kind=\"Pod\" propagationPolicy=Background\nI1010 15:50:17.220619       1 namespace_controller.go:185] Namespace has been deleted provisioning-6016\nI1010 15:50:18.033297       1 namespace_controller.go:185] Namespace has been deleted volume-2649-4755\nI1010 15:50:18.586047       1 stateful_set_control.go:555] StatefulSet statefulset-6702/ss terminating Pod ss-1 for update\nI1010 15:50:18.603064       1 event.go:291] \"Event occurred\" object=\"statefulset-6702/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"delete Pod ss-1 in StatefulSet ss successful\"\nE1010 15:50:18.742976       1 tokens_controller.go:262] error synchronizing serviceaccount resourcequota-3377/default: serviceaccounts \"default\" not found\nI1010 15:50:18.770900       1 resource_quota_controller.go:307] Resource quota has been deleted resourcequota-3377/test-quota\nE1010 15:50:18.811802       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-4714/default: secrets \"default-token-zzrnh\" is forbidden: unable to create new content in namespace provisioning-4714 because it is being terminated\nE1010 15:50:19.078517       1 tokens_controller.go:262] error synchronizing serviceaccount ephemeral-7457/default: secrets \"default-token-s98n5\" is forbidden: unable to create new content in namespace ephemeral-7457 because it is being terminated\nI1010 15:50:19.210325       1 namespace_controller.go:185] Namespace has been deleted volume-9481\nI1010 15:50:19.360650       1 event.go:291] \"Event occurred\" object=\"statefulset-6702/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss-1 in StatefulSet ss successful\"\nW1010 15:50:19.458858       1 reconciler.go:335] Multi-Attach error for volume \"pvc-6e8c4e5b-6bfe-49e6-889e-9af93e2c84c6\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-061df3e74848d3faf\") from node \"ip-172-20-42-51.sa-east-1.compute.internal\" Volume is already exclusively attached to node ip-172-20-33-168.sa-east-1.compute.internal and can't be attached to another\nI1010 15:50:19.459062       1 event.go:291] \"Event occurred\" object=\"statefulset-6702/ss-1\" kind=\"Pod\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedAttachVolume\" message=\"Multi-Attach error for volume \\\"pvc-6e8c4e5b-6bfe-49e6-889e-9af93e2c84c6\\\" Volume is already exclusively attached to one node and can't be attached to another\"\nE1010 15:50:20.406524       1 pv_controller.go:1451] error finding provisioning plugin for claim provisioning-5843/pvc-9brfx: storageclass.storage.k8s.io \"provisioning-5843\" not found\nI1010 15:50:20.406689       1 event.go:291] \"Event occurred\" object=\"provisioning-5843/pvc-9brfx\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" 
message=\"storageclass.storage.k8s.io \\\"provisioning-5843\\\" not found\"\nI1010 15:50:20.557280       1 pv_controller.go:879] volume \"local-6nt4s\" entered phase \"Available\"\nI1010 15:50:20.817988       1 job_controller.go:406] enqueueing job job-4122/suspend-true-to-false\nI1010 15:50:20.819018       1 event.go:291] \"Event occurred\" object=\"job-4122/suspend-true-to-false\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"Completed\" message=\"Job completed\"\nI1010 15:50:20.823483       1 job_controller.go:406] enqueueing job job-4122/suspend-true-to-false\nI1010 15:50:21.127787       1 garbagecollector.go:471] \"Processing object\" object=\"container-probe-1681/busybox-780cabdb-33c6-4f42-a1b9-94931b19171b\" objectUID=b9d7df40-cc60-47a8-80f8-cf578a35f639 kind=\"CiliumEndpoint\" virtual=false\nI1010 15:50:21.141838       1 garbagecollector.go:580] \"Deleting object\" object=\"container-probe-1681/busybox-780cabdb-33c6-4f42-a1b9-94931b19171b\" objectUID=b9d7df40-cc60-47a8-80f8-cf578a35f639 kind=\"CiliumEndpoint\" propagationPolicy=Background\nE1010 15:50:21.450737       1 tokens_controller.go:262] error synchronizing serviceaccount containers-7588/default: secrets \"default-token-dfvz5\" is forbidden: unable to create new content in namespace containers-7588 because it is being terminated\nE1010 15:50:21.556791       1 namespace_controller.go:162] deletion of namespace containers-7588 failed: unexpected items still remain in namespace: containers-7588 for gvr: /v1, Resource=pods\nW1010 15:50:21.697244       1 endpointslice_controller.go:306] Error syncing endpoint slices for service \"statefulset-2594/test\", retrying. Error: EndpointSlice informer cache is out of date\nI1010 15:50:21.697703       1 event.go:291] \"Event occurred\" object=\"statefulset-2594/ss2\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss2-2 in StatefulSet ss2 successful\"\nE1010 15:50:21.709924       1 namespace_controller.go:162] deletion of namespace containers-7588 failed: unexpected items still remain in namespace: containers-7588 for gvr: /v1, Resource=pods\nE1010 15:50:21.841280       1 namespace_controller.go:162] deletion of namespace containers-7588 failed: unexpected items still remain in namespace: containers-7588 for gvr: /v1, Resource=pods\nE1010 15:50:21.988407       1 namespace_controller.go:162] deletion of namespace containers-7588 failed: unexpected items still remain in namespace: containers-7588 for gvr: /v1, Resource=pods\nI1010 15:50:22.267757       1 namespace_controller.go:185] Namespace has been deleted job-2076\nI1010 15:50:22.398553       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"csi-mock-volumes-4774/pvc-d8fh4\"\nI1010 15:50:22.415738       1 pv_controller.go:640] volume \"pvc-d4d470a6-485c-4e19-97d9-d228b21c582c\" is released and reclaim policy \"Delete\" will be executed\nI1010 15:50:22.424000       1 pv_controller.go:879] volume \"pvc-d4d470a6-485c-4e19-97d9-d228b21c582c\" entered phase \"Released\"\nI1010 15:50:22.427753       1 pv_controller.go:1340] isVolumeReleased[pvc-d4d470a6-485c-4e19-97d9-d228b21c582c]: volume is released\nE1010 15:50:22.487769       1 namespace_controller.go:162] deletion of namespace containers-7588 failed: unexpected items still remain in namespace: containers-7588 for gvr: /v1, Resource=pods\nE1010 15:50:22.721309       1 namespace_controller.go:162] deletion of namespace containers-7588 failed: unexpected items still remain in namespace: containers-7588 
for gvr: /v1, Resource=pods\nI1010 15:50:22.928711       1 namespace_controller.go:185] Namespace has been deleted emptydir-6282\nE1010 15:50:23.077333       1 namespace_controller.go:162] deletion of namespace containers-7588 failed: unexpected items still remain in namespace: containers-7588 for gvr: /v1, Resource=pods\nE1010 15:50:23.630712       1 namespace_controller.go:162] deletion of namespace containers-7588 failed: unexpected items still remain in namespace: containers-7588 for gvr: /v1, Resource=pods\nI1010 15:50:23.747080       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-4394/pvc-rbv7l\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-mock-csi-mock-volumes-4394\\\" or manually created by system administrator\"\nI1010 15:50:23.752097       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-4394/pvc-rbv7l\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-mock-csi-mock-volumes-4394\\\" or manually created by system administrator\"\nI1010 15:50:23.799750       1 pv_controller.go:879] volume \"pvc-ca38fde3-4998-43ac-a84d-ea029ddeabef\" entered phase \"Bound\"\nI1010 15:50:23.799898       1 pv_controller.go:982] volume \"pvc-ca38fde3-4998-43ac-a84d-ea029ddeabef\" bound to claim \"csi-mock-volumes-4394/pvc-rbv7l\"\nI1010 15:50:23.839798       1 pv_controller.go:823] claim \"csi-mock-volumes-4394/pvc-rbv7l\" entered phase \"Bound\"\nI1010 15:50:23.943667       1 namespace_controller.go:185] Namespace has been deleted resourcequota-3377\nI1010 15:50:24.119037       1 namespace_controller.go:185] Namespace has been deleted provisioning-4714\nI1010 15:50:24.119452       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-6e8c4e5b-6bfe-49e6-889e-9af93e2c84c6\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-061df3e74848d3faf\") on node \"ip-172-20-33-168.sa-east-1.compute.internal\" \nI1010 15:50:24.139007       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-d4d470a6-485c-4e19-97d9-d228b21c582c\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-4774^4\") on node \"ip-172-20-33-168.sa-east-1.compute.internal\" \nI1010 15:50:24.142136       1 operation_generator.go:1577] Verified volume is safe to detach for volume \"pvc-6e8c4e5b-6bfe-49e6-889e-9af93e2c84c6\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-061df3e74848d3faf\") on node \"ip-172-20-33-168.sa-east-1.compute.internal\" \nI1010 15:50:24.146054       1 operation_generator.go:1577] Verified volume is safe to detach for volume \"pvc-d4d470a6-485c-4e19-97d9-d228b21c582c\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-4774^4\") on node \"ip-172-20-33-168.sa-east-1.compute.internal\" \nI1010 15:50:24.238455       1 namespace_controller.go:185] Namespace has been deleted ephemeral-7457\nE1010 15:50:24.435486       1 namespace_controller.go:162] deletion of namespace containers-7588 failed: unexpected items still remain in namespace: containers-7588 for gvr: /v1, Resource=pods\nI1010 15:50:24.442694       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-ca38fde3-4998-43ac-a84d-ea029ddeabef\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-4394^4\") from node \"ip-172-20-54-137.sa-east-1.compute.internal\" \nI1010 
15:50:24.631494       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"fsgroupchangepolicy-8660/awss6pqk\"\nI1010 15:50:24.637790       1 pv_controller.go:640] volume \"pvc-93ad28a7-20f7-4c57-8c74-2a59d3aecbfc\" is released and reclaim policy \"Delete\" will be executed\nI1010 15:50:24.640319       1 pv_controller.go:879] volume \"pvc-93ad28a7-20f7-4c57-8c74-2a59d3aecbfc\" entered phase \"Released\"\nI1010 15:50:24.643481       1 pv_controller.go:1340] isVolumeReleased[pvc-93ad28a7-20f7-4c57-8c74-2a59d3aecbfc]: volume is released\nI1010 15:50:24.696568       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume \"pvc-d4d470a6-485c-4e19-97d9-d228b21c582c\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-4774^4\") on node \"ip-172-20-33-168.sa-east-1.compute.internal\" \nE1010 15:50:24.800783       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI1010 15:50:25.020200       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume \"pvc-ca38fde3-4998-43ac-a84d-ea029ddeabef\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-4394^4\") from node \"ip-172-20-54-137.sa-east-1.compute.internal\" \nI1010 15:50:25.020421       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-4394/pvc-volume-tester-6vh49\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-ca38fde3-4998-43ac-a84d-ea029ddeabef\\\" \"\nI1010 15:50:25.060021       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-93ad28a7-20f7-4c57-8c74-2a59d3aecbfc\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0f605956eedeb0544\") on node \"ip-172-20-42-51.sa-east-1.compute.internal\" \nI1010 15:50:25.063464       1 operation_generator.go:1577] Verified volume is safe to detach for volume \"pvc-93ad28a7-20f7-4c57-8c74-2a59d3aecbfc\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0f605956eedeb0544\") on node \"ip-172-20-42-51.sa-east-1.compute.internal\" \nE1010 15:50:25.438880       1 pv_protection_controller.go:118] PV pvc-d4d470a6-485c-4e19-97d9-d228b21c582c failed with : Operation cannot be fulfilled on persistentvolumes \"pvc-d4d470a6-485c-4e19-97d9-d228b21c582c\": the object has been modified; please apply your changes to the latest version and try again\nI1010 15:50:25.441244       1 pv_controller_base.go:505] deletion of claim \"csi-mock-volumes-4774/pvc-d8fh4\" was already processed\nI1010 15:50:25.790346       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-7457-8822/csi-hostpathplugin-0\" objectUID=eb88a5e1-e449-4233-b0de-57d85c2d24cd kind=\"Pod\" virtual=false\nI1010 15:50:25.790724       1 stateful_set.go:440] StatefulSet has been deleted ephemeral-7457-8822/csi-hostpathplugin\nI1010 15:50:25.791043       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-7457-8822/csi-hostpathplugin-77c67f4b8d\" objectUID=ad0472ab-9865-46e0-8ca0-1b8d092ae1a4 kind=\"ControllerRevision\" virtual=false\nI1010 15:50:25.793264       1 garbagecollector.go:580] \"Deleting object\" object=\"ephemeral-7457-8822/csi-hostpathplugin-0\" objectUID=eb88a5e1-e449-4233-b0de-57d85c2d24cd kind=\"Pod\" propagationPolicy=Background\nI1010 15:50:25.794183       1 garbagecollector.go:580] \"Deleting object\" object=\"ephemeral-7457-8822/csi-hostpathplugin-77c67f4b8d\" 
objectUID=ad0472ab-9865-46e0-8ca0-1b8d092ae1a4 kind=\"ControllerRevision\" propagationPolicy=Background\nE1010 15:50:25.861531       1 namespace_controller.go:162] deletion of namespace containers-7588 failed: unexpected items still remain in namespace: containers-7588 for gvr: /v1, Resource=pods\nE1010 15:50:25.960536       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE1010 15:50:26.512579       1 tokens_controller.go:262] error synchronizing serviceaccount container-probe-1681/default: secrets \"default-token-s2kwz\" is forbidden: unable to create new content in namespace container-probe-1681 because it is being terminated\nI1010 15:50:27.253933       1 garbagecollector.go:471] \"Processing object\" object=\"job-4122/suspend-true-to-false--1-57kkg\" objectUID=d7d10842-821c-4c95-8b7e-269a335e350c kind=\"Pod\" virtual=false\nI1010 15:50:27.254217       1 job_controller.go:406] enqueueing job job-4122/suspend-true-to-false\nI1010 15:50:27.254459       1 garbagecollector.go:471] \"Processing object\" object=\"job-4122/suspend-true-to-false--1-nqjxv\" objectUID=c2bb2095-48bf-4aef-b81d-5732314bf206 kind=\"Pod\" virtual=false\nI1010 15:50:27.254592       1 garbagecollector.go:471] \"Processing object\" object=\"job-4122/suspend-true-to-false--1-ggnr9\" objectUID=f0d3cef1-91b2-4f8f-b932-852b236de711 kind=\"Pod\" virtual=false\nI1010 15:50:27.254846       1 garbagecollector.go:471] \"Processing object\" object=\"job-4122/suspend-true-to-false--1-pjc64\" objectUID=3e29c70c-883e-461a-9adf-f6143b1d469a kind=\"Pod\" virtual=false\nI1010 15:50:27.257696       1 garbagecollector.go:580] \"Deleting object\" object=\"job-4122/suspend-true-to-false--1-57kkg\" objectUID=d7d10842-821c-4c95-8b7e-269a335e350c kind=\"Pod\" propagationPolicy=Background\nI1010 15:50:27.258023       1 garbagecollector.go:580] \"Deleting object\" object=\"job-4122/suspend-true-to-false--1-pjc64\" objectUID=3e29c70c-883e-461a-9adf-f6143b1d469a kind=\"Pod\" propagationPolicy=Background\nI1010 15:50:27.258067       1 garbagecollector.go:580] \"Deleting object\" object=\"job-4122/suspend-true-to-false--1-nqjxv\" objectUID=c2bb2095-48bf-4aef-b81d-5732314bf206 kind=\"Pod\" propagationPolicy=Background\nI1010 15:50:27.258277       1 garbagecollector.go:580] \"Deleting object\" object=\"job-4122/suspend-true-to-false--1-ggnr9\" objectUID=f0d3cef1-91b2-4f8f-b932-852b236de711 kind=\"Pod\" propagationPolicy=Background\nI1010 15:50:27.329041       1 namespace_controller.go:185] Namespace has been deleted events-1608\nI1010 15:50:27.769015       1 expand_controller.go:289] Ignoring the PVC \"volume-expand-1028/csi-hostpath6frrr\" (uid: \"b7ee2f91-45c3-428e-b167-76d642283487\") : didn't find a plugin capable of expanding the volume; waiting for an external controller to process this PVC.\nI1010 15:50:27.769175       1 event.go:291] \"Event occurred\" object=\"volume-expand-1028/csi-hostpath6frrr\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ExternalExpanding\" message=\"Ignoring the PVC: didn't find a plugin capable of expanding the volume; waiting for an external controller to process this PVC.\"\nE1010 15:50:28.163147       1 pv_controller.go:1451] error finding provisioning plugin for claim provisioning-4222/pvc-wcqqc: storageclass.storage.k8s.io \"provisioning-4222\" not found\nI1010 15:50:28.163275       1 event.go:291] \"Event occurred\" 
object=\"provisioning-4222/pvc-wcqqc\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"provisioning-4222\\\" not found\"\nI1010 15:50:28.310910       1 pv_controller.go:879] volume \"local-gdkfx\" entered phase \"Available\"\nI1010 15:50:28.367804       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"provisioning-7347/pvc-s8bc5\"\nI1010 15:50:28.374294       1 pv_controller.go:640] volume \"local-qndhz\" is released and reclaim policy \"Retain\" will be executed\nI1010 15:50:28.377684       1 pv_controller.go:879] volume \"local-qndhz\" entered phase \"Released\"\nI1010 15:50:28.518187       1 pv_controller_base.go:505] deletion of claim \"provisioning-7347/pvc-s8bc5\" was already processed\nE1010 15:50:29.896163       1 pv_controller.go:1451] error finding provisioning plugin for claim provisioning-2093/pvc-lfvjl: storageclass.storage.k8s.io \"provisioning-2093\" not found\nI1010 15:50:29.896333       1 event.go:291] \"Event occurred\" object=\"provisioning-2093/pvc-lfvjl\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"provisioning-2093\\\" not found\"\nI1010 15:50:30.042646       1 pv_controller.go:879] volume \"local-cnpbj\" entered phase \"Available\"\nI1010 15:50:30.749626       1 pv_controller.go:930] claim \"provisioning-2093/pvc-lfvjl\" bound to volume \"local-cnpbj\"\nI1010 15:50:30.753098       1 pv_controller.go:1340] isVolumeReleased[pvc-93ad28a7-20f7-4c57-8c74-2a59d3aecbfc]: volume is released\nI1010 15:50:30.758179       1 pv_controller.go:879] volume \"local-cnpbj\" entered phase \"Bound\"\nI1010 15:50:30.758208       1 pv_controller.go:982] volume \"local-cnpbj\" bound to claim \"provisioning-2093/pvc-lfvjl\"\nI1010 15:50:30.763494       1 pv_controller.go:823] claim \"provisioning-2093/pvc-lfvjl\" entered phase \"Bound\"\nI1010 15:50:30.763709       1 pv_controller.go:930] claim \"provisioning-5843/pvc-9brfx\" bound to volume \"local-6nt4s\"\nI1010 15:50:30.771248       1 pv_controller.go:879] volume \"local-6nt4s\" entered phase \"Bound\"\nI1010 15:50:30.771359       1 pv_controller.go:982] volume \"local-6nt4s\" bound to claim \"provisioning-5843/pvc-9brfx\"\nI1010 15:50:30.778628       1 pv_controller.go:823] claim \"provisioning-5843/pvc-9brfx\" entered phase \"Bound\"\nI1010 15:50:30.779270       1 pv_controller.go:930] claim \"provisioning-4222/pvc-wcqqc\" bound to volume \"local-gdkfx\"\nI1010 15:50:30.788073       1 pv_controller.go:879] volume \"local-gdkfx\" entered phase \"Bound\"\nI1010 15:50:30.788175       1 pv_controller.go:982] volume \"local-gdkfx\" bound to claim \"provisioning-4222/pvc-wcqqc\"\nI1010 15:50:30.793608       1 pv_controller.go:823] claim \"provisioning-4222/pvc-wcqqc\" entered phase \"Bound\"\nE1010 15:50:31.033785       1 tokens_controller.go:262] error synchronizing serviceaccount security-context-test-1343/default: secrets \"default-token-tpjgb\" is forbidden: unable to create new content in namespace security-context-test-1343 because it is being terminated\nI1010 15:50:31.473988       1 resource_quota_controller.go:435] syncing resource quota controller with updated resources from discovery: added: [crd-publish-openapi-test-common-group.example.com/v6, Resource=e2e-test-crd-publish-openapi-3726-crds crd-publish-openapi-test-common-group.example.com/v6, Resource=e2e-test-crd-publish-openapi-395-crds], removed: []\nI1010 15:50:31.474158       1 
resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for e2e-test-crd-publish-openapi-395-crds.crd-publish-openapi-test-common-group.example.com\nI1010 15:50:31.474231       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for e2e-test-crd-publish-openapi-3726-crds.crd-publish-openapi-test-common-group.example.com\nI1010 15:50:31.474301       1 shared_informer.go:240] Waiting for caches to sync for resource quota\nI1010 15:50:31.569456       1 namespace_controller.go:185] Namespace has been deleted container-probe-1681\nI1010 15:50:31.574569       1 shared_informer.go:247] Caches are synced for resource quota \nI1010 15:50:31.574583       1 resource_quota_controller.go:454] synced quota controller\nI1010 15:50:31.769219       1 garbagecollector.go:213] syncing garbage collector with updated resources from discovery (attempt 1): added: [crd-publish-openapi-test-common-group.example.com/v6, Resource=e2e-test-crd-publish-openapi-3726-crds crd-publish-openapi-test-common-group.example.com/v6, Resource=e2e-test-crd-publish-openapi-395-crds], removed: []\nI1010 15:50:31.782146       1 shared_informer.go:240] Waiting for caches to sync for garbage collector\nI1010 15:50:31.782212       1 shared_informer.go:247] Caches are synced for garbage collector \nI1010 15:50:31.782223       1 garbagecollector.go:254] synced garbage collector\nE1010 15:50:31.783885       1 pv_controller.go:1451] error finding provisioning plugin for claim volume-1135/pvc-s8br4: storageclass.storage.k8s.io \"volume-1135\" not found\nI1010 15:50:31.784292       1 event.go:291] \"Event occurred\" object=\"volume-1135/pvc-s8br4\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"volume-1135\\\" not found\"\nI1010 15:50:31.818287       1 pv_controller.go:1340] isVolumeReleased[pvc-93ad28a7-20f7-4c57-8c74-2a59d3aecbfc]: volume is released\nI1010 15:50:31.936048       1 pv_controller.go:879] volume \"aws-j97lg\" entered phase \"Available\"\nI1010 15:50:31.964066       1 pv_controller_base.go:505] deletion of claim \"fsgroupchangepolicy-8660/awss6pqk\" was already processed\nI1010 15:50:32.073701       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume \"pvc-93ad28a7-20f7-4c57-8c74-2a59d3aecbfc\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0f605956eedeb0544\") on node \"ip-172-20-42-51.sa-east-1.compute.internal\" \nE1010 15:50:32.187654       1 tokens_controller.go:262] error synchronizing serviceaccount csi-mock-volumes-4774/default: secrets \"default-token-dcjqj\" is forbidden: unable to create new content in namespace csi-mock-volumes-4774 because it is being terminated\nI1010 15:50:32.446918       1 namespace_controller.go:185] Namespace has been deleted job-4122\nE1010 15:50:33.551090       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource\nI1010 15:50:33.559745       1 namespace_controller.go:185] Namespace has been deleted containers-7588\nI1010 15:50:34.019908       1 garbagecollector.go:471] \"Processing object\" object=\"services-3513/verify-service-up-exec-pod-k84hj\" objectUID=67d5c3cd-353b-4293-979b-b1387a797965 kind=\"CiliumEndpoint\" virtual=false\nI1010 15:50:34.027564       1 garbagecollector.go:580] \"Deleting object\" object=\"services-3513/verify-service-up-exec-pod-k84hj\" objectUID=67d5c3cd-353b-4293-979b-b1387a797965 
kind=\"CiliumEndpoint\" propagationPolicy=Background\nE1010 15:50:34.074273       1 namespace_controller.go:162] deletion of namespace disruption-8003 failed: unexpected items still remain in namespace: disruption-8003 for gvr: /v1, Resource=pods\nE1010 15:50:34.361046       1 namespace_controller.go:162] deletion of namespace disruption-8003 failed: unexpected items still remain in namespace: disruption-8003 for gvr: /v1, Resource=pods\nE1010 15:50:34.461948       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE1010 15:50:34.483640       1 namespace_controller.go:162] deletion of namespace disruption-8003 failed: unexpected items still remain in namespace: disruption-8003 for gvr: /v1, Resource=pods\nE1010 15:50:34.632534       1 namespace_controller.go:162] deletion of namespace disruption-8003 failed: unexpected items still remain in namespace: disruption-8003 for gvr: /v1, Resource=pods\nE1010 15:50:34.853594       1 namespace_controller.go:162] deletion of namespace disruption-8003 failed: unexpected items still remain in namespace: disruption-8003 for gvr: /v1, Resource=pods\nE1010 15:50:35.004931       1 pv_controller.go:1451] error finding provisioning plugin for claim provisioning-7960/pvc-zsnq2: storageclass.storage.k8s.io \"provisioning-7960\" not found\nI1010 15:50:35.005356       1 event.go:291] \"Event occurred\" object=\"provisioning-7960/pvc-zsnq2\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"provisioning-7960\\\" not found\"\nE1010 15:50:35.055682       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-7347/default: secrets \"default-token-8zjkd\" is forbidden: unable to create new content in namespace provisioning-7347 because it is being terminated\nI1010 15:50:35.163299       1 pv_controller.go:879] volume \"local-9wq86\" entered phase \"Available\"\nE1010 15:50:35.201546       1 namespace_controller.go:162] deletion of namespace disruption-8003 failed: unexpected items still remain in namespace: disruption-8003 for gvr: /v1, Resource=pods\nE1010 15:50:35.300552       1 tokens_controller.go:262] error synchronizing serviceaccount secrets-1338/default: secrets \"default-token-vrskf\" is forbidden: unable to create new content in namespace secrets-1338 because it is being terminated\nI1010 15:50:35.437394       1 stateful_set_control.go:555] StatefulSet statefulset-2594/ss2 terminating Pod ss2-1 for update\nI1010 15:50:35.448105       1 event.go:291] \"Event occurred\" object=\"statefulset-2594/ss2\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"delete Pod ss2-1 in StatefulSet ss2 successful\"\nE1010 15:50:35.534776       1 namespace_controller.go:162] deletion of namespace disruption-8003 failed: unexpected items still remain in namespace: disruption-8003 for gvr: /v1, Resource=pods\nE1010 15:50:36.047727       1 namespace_controller.go:162] deletion of namespace disruption-8003 failed: unexpected items still remain in namespace: disruption-8003 for gvr: /v1, Resource=pods\nI1010 15:50:36.199443       1 namespace_controller.go:185] Namespace has been deleted security-context-test-1343\nI1010 15:50:36.299669       1 pvc_protection_controller.go:303] \"Pod uses PVC\" 
pod=\"persistent-local-volumes-test-4324/pod-642f4eda-2526-4fbd-9ce5-34a65d6e54e9\" PVC=\"persistent-local-volumes-test-4324/pvc-z484l\"\nI1010 15:50:36.299693       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-4324/pvc-z484l\"\nI1010 15:50:36.320475       1 namespace_controller.go:185] Namespace has been deleted ephemeral-7457-8822\nE1010 15:50:36.871337       1 namespace_controller.go:162] deletion of namespace disruption-8003 failed: unexpected items still remain in namespace: disruption-8003 for gvr: /v1, Resource=pods\nE1010 15:50:36.925583       1 namespace_controller.go:162] deletion of namespace kubectl-1423 failed: unexpected items still remain in namespace: kubectl-1423 for gvr: /v1, Resource=pods\nI1010 15:50:37.069205       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume \"pvc-6e8c4e5b-6bfe-49e6-889e-9af93e2c84c6\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-061df3e74848d3faf\") on node \"ip-172-20-33-168.sa-east-1.compute.internal\" \nI1010 15:50:37.081444       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-6e8c4e5b-6bfe-49e6-889e-9af93e2c84c6\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-061df3e74848d3faf\") from node \"ip-172-20-42-51.sa-east-1.compute.internal\" \nE1010 15:50:37.208754       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI1010 15:50:37.299669       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-4774\nI1010 15:50:37.576220       1 stateful_set.go:440] StatefulSet has been deleted csi-mock-volumes-4774-3147/csi-mockplugin\nI1010 15:50:37.576391       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-4774-3147/csi-mockplugin-0\" objectUID=88c77a6f-445a-4db7-8ce8-1795f8d68f26 kind=\"Pod\" virtual=false\nI1010 15:50:37.576719       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-4774-3147/csi-mockplugin-77c688bc75\" objectUID=f719f65d-778d-47ce-a1ff-3f0bb5d1bc1e kind=\"ControllerRevision\" virtual=false\nI1010 15:50:37.581423       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-4774-3147/csi-mockplugin-0\" objectUID=88c77a6f-445a-4db7-8ce8-1795f8d68f26 kind=\"Pod\" propagationPolicy=Background\nI1010 15:50:37.581809       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-4774-3147/csi-mockplugin-77c688bc75\" objectUID=f719f65d-778d-47ce-a1ff-3f0bb5d1bc1e kind=\"ControllerRevision\" propagationPolicy=Background\nI1010 15:50:37.933548       1 stateful_set.go:440] StatefulSet has been deleted csi-mock-volumes-4774-3147/csi-mockplugin-attacher\nI1010 15:50:37.933697       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-4774-3147/csi-mockplugin-attacher-0\" objectUID=fa192c0a-c59d-4d3f-9e3a-3862f536bb15 kind=\"Pod\" virtual=false\nI1010 15:50:37.934021       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-4774-3147/csi-mockplugin-attacher-55f7cfb54f\" objectUID=3ea075b8-ebfe-45b6-abf0-b8b7bb3830b6 kind=\"ControllerRevision\" virtual=false\nI1010 15:50:37.943788       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-4774-3147/csi-mockplugin-attacher-55f7cfb54f\" objectUID=3ea075b8-ebfe-45b6-abf0-b8b7bb3830b6 kind=\"ControllerRevision\" 
propagationPolicy=Background\nI1010 15:50:37.944239       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-4774-3147/csi-mockplugin-attacher-0\" objectUID=fa192c0a-c59d-4d3f-9e3a-3862f536bb15 kind=\"Pod\" propagationPolicy=Background\nI1010 15:50:38.185954       1 event.go:291] \"Event occurred\" object=\"statefulset-2594/ss2\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss2-1 in StatefulSet ss2 successful\"\nE1010 15:50:38.265081       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource\nE1010 15:50:38.458220       1 namespace_controller.go:162] deletion of namespace disruption-8003 failed: unexpected items still remain in namespace: disruption-8003 for gvr: /v1, Resource=pods\nE1010 15:50:38.643242       1 tokens_controller.go:262] error synchronizing serviceaccount kubectl-2166/default: secrets \"default-token-g6zng\" is forbidden: unable to create new content in namespace kubectl-2166 because it is being terminated\nI1010 15:50:38.778231       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-4324/pod-642f4eda-2526-4fbd-9ce5-34a65d6e54e9\" PVC=\"persistent-local-volumes-test-4324/pvc-z484l\"\nI1010 15:50:38.778258       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-4324/pvc-z484l\"\nI1010 15:50:39.180105       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-4324/pod-4a5f5875-6734-429c-ba64-e50be85dd1a5\" PVC=\"persistent-local-volumes-test-4324/pvc-z484l\"\nI1010 15:50:39.180145       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-4324/pvc-z484l\"\nI1010 15:50:39.186545       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-4324/pod-4a5f5875-6734-429c-ba64-e50be85dd1a5\" PVC=\"persistent-local-volumes-test-4324/pvc-z484l\"\nI1010 15:50:39.186682       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-4324/pvc-z484l\"\nE1010 15:50:39.446049       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI1010 15:50:39.979344       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-4324/pod-4a5f5875-6734-429c-ba64-e50be85dd1a5\" PVC=\"persistent-local-volumes-test-4324/pvc-z484l\"\nI1010 15:50:39.979373       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-4324/pvc-z484l\"\nI1010 15:50:40.056420       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"fsgroupchangepolicy-3401/awsvqlkh\"\nI1010 15:50:40.062785       1 pv_controller.go:640] volume \"pvc-25f038ca-8dba-4753-896d-a810200b92b0\" is released and reclaim policy \"Delete\" will be executed\nI1010 15:50:40.065828       1 pv_controller.go:879] volume \"pvc-25f038ca-8dba-4753-896d-a810200b92b0\" entered phase \"Released\"\nI1010 15:50:40.072106       1 pv_controller.go:1340] isVolumeReleased[pvc-25f038ca-8dba-4753-896d-a810200b92b0]: volume is released\nI1010 15:50:40.143705       1 pvc_protection_controller.go:291] \"PVC is unused\" 
PVC=\"provisioning-5843/pvc-9brfx\"\nI1010 15:50:40.150446       1 pv_controller.go:640] volume \"local-6nt4s\" is released and reclaim policy \"Retain\" will be executed\nI1010 15:50:40.153528       1 pv_controller.go:879] volume \"local-6nt4s\" entered phase \"Released\"\nI1010 15:50:40.244306       1 namespace_controller.go:185] Namespace has been deleted provisioning-7347\nI1010 15:50:40.289351       1 pv_controller_base.go:505] deletion of claim \"provisioning-5843/pvc-9brfx\" was already processed\nI1010 15:50:40.311738       1 namespace_controller.go:185] Namespace has been deleted projected-4853\nI1010 15:50:40.378233       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-4324/pod-4a5f5875-6734-429c-ba64-e50be85dd1a5\" PVC=\"persistent-local-volumes-test-4324/pvc-z484l\"\nI1010 15:50:40.378382       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-4324/pvc-z484l\"\nI1010 15:50:40.383196       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"persistent-local-volumes-test-4324/pvc-z484l\"\nI1010 15:50:40.388914       1 pv_controller.go:640] volume \"local-pvzg2j7\" is released and reclaim policy \"Retain\" will be executed\nI1010 15:50:40.394191       1 pv_controller.go:879] volume \"local-pvzg2j7\" entered phase \"Released\"\nI1010 15:50:40.398019       1 pv_controller_base.go:505] deletion of claim \"persistent-local-volumes-test-4324/pvc-z484l\" was already processed\nI1010 15:50:40.427311       1 namespace_controller.go:185] Namespace has been deleted secrets-1338\nE1010 15:50:40.534027       1 tokens_controller.go:262] error synchronizing serviceaccount fsgroupchangepolicy-8660/default: secrets \"default-token-xpx42\" is forbidden: unable to create new content in namespace fsgroupchangepolicy-8660 because it is being terminated\nE1010 15:50:41.128177       1 namespace_controller.go:162] deletion of namespace disruption-8003 failed: unexpected items still remain in namespace: disruption-8003 for gvr: /v1, Resource=pods\nE1010 15:50:41.258110       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI1010 15:50:41.272234       1 garbagecollector.go:471] \"Processing object\" object=\"webhook-9906/e2e-test-webhook-vnsbh\" objectUID=39023b78-e8c4-4316-a220-2cbfa55c60bd kind=\"EndpointSlice\" virtual=false\nI1010 15:50:41.275122       1 garbagecollector.go:580] \"Deleting object\" object=\"webhook-9906/e2e-test-webhook-vnsbh\" objectUID=39023b78-e8c4-4316-a220-2cbfa55c60bd kind=\"EndpointSlice\" propagationPolicy=Background\nI1010 15:50:41.421472       1 garbagecollector.go:471] \"Processing object\" object=\"webhook-9906/sample-webhook-deployment-78988fc6cd\" objectUID=70f91af4-375d-4ddc-ad25-b54231aa21ef kind=\"ReplicaSet\" virtual=false\nI1010 15:50:41.421866       1 deployment_controller.go:583] \"Deployment has been deleted\" deployment=\"webhook-9906/sample-webhook-deployment\"\nI1010 15:50:41.423852       1 garbagecollector.go:580] \"Deleting object\" object=\"webhook-9906/sample-webhook-deployment-78988fc6cd\" objectUID=70f91af4-375d-4ddc-ad25-b54231aa21ef kind=\"ReplicaSet\" propagationPolicy=Background\nI1010 15:50:41.427933       1 garbagecollector.go:471] \"Processing object\" object=\"webhook-9906/sample-webhook-deployment-78988fc6cd-qj8c7\" objectUID=a0f29dd6-39b6-4046-b7b8-b07dbd577e09 
kind=\"Pod\" virtual=false\nI1010 15:50:41.429457       1 garbagecollector.go:580] \"Deleting object\" object=\"webhook-9906/sample-webhook-deployment-78988fc6cd-qj8c7\" objectUID=a0f29dd6-39b6-4046-b7b8-b07dbd577e09 kind=\"Pod\" propagationPolicy=Background\nI1010 15:50:41.434804       1 garbagecollector.go:471] \"Processing object\" object=\"webhook-9906/sample-webhook-deployment-78988fc6cd-qj8c7\" objectUID=ba06cb41-30be-41bc-96e3-7811dfa206fa kind=\"CiliumEndpoint\" virtual=false\nI1010 15:50:41.437161       1 garbagecollector.go:580] \"Deleting object\" object=\"webhook-9906/sample-webhook-deployment-78988fc6cd-qj8c7\" objectUID=ba06cb41-30be-41bc-96e3-7811dfa206fa kind=\"CiliumEndpoint\" propagationPolicy=Background\nI1010 15:50:41.563758       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume \"pvc-6e8c4e5b-6bfe-49e6-889e-9af93e2c84c6\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-061df3e74848d3faf\") from node \"ip-172-20-42-51.sa-east-1.compute.internal\" \nI1010 15:50:41.564246       1 event.go:291] \"Event occurred\" object=\"statefulset-6702/ss-1\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-6e8c4e5b-6bfe-49e6-889e-9af93e2c84c6\\\" \"\nE1010 15:50:41.828263       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI1010 15:50:41.882113       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-6047-8405/csi-mockplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-mockplugin-0 in StatefulSet csi-mockplugin successful\"\nI1010 15:50:42.027406       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-6047-8405/csi-mockplugin-attacher\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-mockplugin-attacher-0 in StatefulSet csi-mockplugin-attacher successful\"\nI1010 15:50:42.660791       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"provisioning-2093/pvc-lfvjl\"\nI1010 15:50:42.665410       1 pv_controller.go:640] volume \"local-cnpbj\" is released and reclaim policy \"Retain\" will be executed\nI1010 15:50:42.668850       1 pv_controller.go:879] volume \"local-cnpbj\" entered phase \"Released\"\nI1010 15:50:42.807992       1 pv_controller_base.go:505] deletion of claim \"provisioning-2093/pvc-lfvjl\" was already processed\nI1010 15:50:42.990542       1 garbagecollector.go:471] \"Processing object\" object=\"container-runtime-1630/image-pull-testf33d90f4-7d28-4a69-8f7e-6d922efb3f0d\" objectUID=6c7ea877-c5c3-44ba-9979-a6079f308e66 kind=\"CiliumEndpoint\" virtual=false\nI1010 15:50:42.993279       1 garbagecollector.go:580] \"Deleting object\" object=\"container-runtime-1630/image-pull-testf33d90f4-7d28-4a69-8f7e-6d922efb3f0d\" objectUID=6c7ea877-c5c3-44ba-9979-a6079f308e66 kind=\"CiliumEndpoint\" propagationPolicy=Background\nE1010 15:50:43.138190       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI1010 15:50:43.260546       1 event.go:291] \"Event occurred\" object=\"statefulset-2611/datadir-ss-0\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" 
reason=\"WaitForFirstConsumer\" message=\"waiting for first consumer to be created before binding\"\nI1010 15:50:43.261008       1 event.go:291] \"Event occurred\" object=\"statefulset-2611/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Claim datadir-ss-0 Pod ss-0 in StatefulSet ss success\"\nE1010 15:50:43.265680       1 tokens_controller.go:262] error synchronizing serviceaccount persistent-local-volumes-test-4324/default: secrets \"default-token-2hv94\" is forbidden: unable to create new content in namespace persistent-local-volumes-test-4324 because it is being terminated\nI1010 15:50:43.270829       1 event.go:291] \"Event occurred\" object=\"statefulset-2611/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss-0 in StatefulSet ss successful\"\nI1010 15:50:43.283243       1 event.go:291] \"Event occurred\" object=\"statefulset-2611/datadir-ss-0\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"ebs.csi.aws.com\\\" or manually created by system administrator\"\nE1010 15:50:43.458826       1 tokens_controller.go:262] error synchronizing serviceaccount csi-mock-volumes-4774-3147/default: secrets \"default-token-lnng4\" is forbidden: unable to create new content in namespace csi-mock-volumes-4774-3147 because it is being terminated\nI1010 15:50:43.692829       1 namespace_controller.go:185] Namespace has been deleted kubectl-2166\nI1010 15:50:43.787681       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"csi-mock-volumes-4394/pvc-rbv7l\"\nI1010 15:50:43.793828       1 pv_controller.go:640] volume \"pvc-ca38fde3-4998-43ac-a84d-ea029ddeabef\" is released and reclaim policy \"Delete\" will be executed\nI1010 15:50:43.797243       1 pv_controller.go:879] volume \"pvc-ca38fde3-4998-43ac-a84d-ea029ddeabef\" entered phase \"Released\"\nI1010 15:50:43.802111       1 pv_controller.go:1340] isVolumeReleased[pvc-ca38fde3-4998-43ac-a84d-ea029ddeabef]: volume is released\nI1010 15:50:43.811086       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"volume-expand-1028/csi-hostpath6frrr\"\nI1010 15:50:43.816320       1 pv_controller.go:640] volume \"pvc-b7ee2f91-45c3-428e-b167-76d642283487\" is released and reclaim policy \"Delete\" will be executed\nI1010 15:50:43.819505       1 pv_controller.go:879] volume \"pvc-b7ee2f91-45c3-428e-b167-76d642283487\" entered phase \"Released\"\nI1010 15:50:43.821207       1 pv_controller.go:1340] isVolumeReleased[pvc-b7ee2f91-45c3-428e-b167-76d642283487]: volume is released\nI1010 15:50:43.829096       1 pv_controller_base.go:505] deletion of claim \"volume-expand-1028/csi-hostpath6frrr\" was already processed\nI1010 15:50:44.009544       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"kubectl-8136/agnhost-primary\" need=1 creating=1\nI1010 15:50:44.014503       1 event.go:291] \"Event occurred\" object=\"kubectl-8136/agnhost-primary\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: agnhost-primary-2pt2w\"\nI1010 15:50:45.065712       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-b7ee2f91-45c3-428e-b167-76d642283487\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-volume-expand-1028^bf1dc328-29e1-11ec-9d2b-3e943e804a0c\") on node \"ip-172-20-42-51.sa-east-1.compute.internal\" \nI1010 
15:50:45.067625       1 operation_generator.go:1577] Verified volume is safe to detach for volume \"pvc-b7ee2f91-45c3-428e-b167-76d642283487\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-volume-expand-1028^bf1dc328-29e1-11ec-9d2b-3e943e804a0c\") on node \"ip-172-20-42-51.sa-east-1.compute.internal\" \nI1010 15:50:45.068964       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume \"pvc-b7ee2f91-45c3-428e-b167-76d642283487\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-volume-expand-1028^bf1dc328-29e1-11ec-9d2b-3e943e804a0c\") on node \"ip-172-20-42-51.sa-east-1.compute.internal\" \nI1010 15:50:45.375240       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"services-1728/service-proxy-disabled\" need=3 creating=3\nI1010 15:50:45.380240       1 event.go:291] \"Event occurred\" object=\"services-1728/service-proxy-disabled\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: service-proxy-disabled-dpqr9\"\nI1010 15:50:45.393542       1 event.go:291] \"Event occurred\" object=\"services-1728/service-proxy-disabled\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: service-proxy-disabled-24xbb\"\nI1010 15:50:45.393569       1 event.go:291] \"Event occurred\" object=\"services-1728/service-proxy-disabled\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: service-proxy-disabled-nv78m\"\nI1010 15:50:45.678119       1 namespace_controller.go:185] Namespace has been deleted fsgroupchangepolicy-8660\nE1010 15:50:45.689358       1 tokens_controller.go:262] error synchronizing serviceaccount services-3513/default: secrets \"default-token-jr9tq\" is forbidden: unable to create new content in namespace services-3513 because it is being terminated\nI1010 15:50:45.707671       1 garbagecollector.go:471] \"Processing object\" object=\"services-3513/service-headless-7zpnv\" objectUID=323d5dc8-4608-407e-9956-496a23ef7528 kind=\"Pod\" virtual=false\nI1010 15:50:45.707879       1 garbagecollector.go:471] \"Processing object\" object=\"services-3513/service-headless-mfssm\" objectUID=d09793db-6649-43bd-9569-9b0da2bd1525 kind=\"Pod\" virtual=false\nI1010 15:50:45.707951       1 garbagecollector.go:471] \"Processing object\" object=\"services-3513/service-headless-8l8bt\" objectUID=7b08de6a-0287-4c49-8b7c-bbe0c553581a kind=\"Pod\" virtual=false\nI1010 15:50:45.710247       1 garbagecollector.go:580] \"Deleting object\" object=\"services-3513/service-headless-7zpnv\" objectUID=323d5dc8-4608-407e-9956-496a23ef7528 kind=\"Pod\" propagationPolicy=Background\nI1010 15:50:45.710537       1 garbagecollector.go:580] \"Deleting object\" object=\"services-3513/service-headless-mfssm\" objectUID=d09793db-6649-43bd-9569-9b0da2bd1525 kind=\"Pod\" propagationPolicy=Background\nI1010 15:50:45.710869       1 garbagecollector.go:580] \"Deleting object\" object=\"services-3513/service-headless-8l8bt\" objectUID=7b08de6a-0287-4c49-8b7c-bbe0c553581a kind=\"Pod\" propagationPolicy=Background\nI1010 15:50:45.713973       1 garbagecollector.go:471] \"Processing object\" object=\"services-3513/service-headless-toggled-rrgwg\" objectUID=77f1ffea-f2ec-4dff-b153-c3d9007b4bf6 kind=\"Pod\" virtual=false\nI1010 15:50:45.714283       1 garbagecollector.go:471] \"Processing object\" object=\"services-3513/service-headless-toggled-5g5j4\" objectUID=b259aab7-0be3-4a12-b975-7d0c90666ac1 kind=\"Pod\" virtual=false\nI1010 
15:50:45.714544       1 garbagecollector.go:471] \"Processing object\" object=\"services-3513/service-headless-toggled-8m9qd\" objectUID=d6091da0-2613-45f5-9d0f-a546e2e1742f kind=\"Pod\" virtual=false\nW1010 15:50:45.718244       1 utils.go:265] Service services-3513/service-headless using reserved endpoint slices label, skipping label service.kubernetes.io/headless: \nI1010 15:50:45.724158       1 garbagecollector.go:580] \"Deleting object\" object=\"services-3513/service-headless-toggled-5g5j4\" objectUID=b259aab7-0be3-4a12-b975-7d0c90666ac1 kind=\"Pod\" propagationPolicy=Background\nI1010 15:50:45.724331       1 garbagecollector.go:580] \"Deleting object\" object=\"services-3513/service-headless-toggled-rrgwg\" objectUID=77f1ffea-f2ec-4dff-b153-c3d9007b4bf6 kind=\"Pod\" propagationPolicy=Background\nI1010 15:50:45.724389       1 garbagecollector.go:580] \"Deleting object\" object=\"services-3513/service-headless-toggled-8m9qd\" objectUID=d6091da0-2613-45f5-9d0f-a546e2e1742f kind=\"Pod\" propagationPolicy=Background\nW1010 15:50:45.727238       1 utils.go:265] Service services-3513/service-headless using reserved endpoint slices label, skipping label service.kubernetes.io/headless: \nI1010 15:50:45.751860       1 pv_controller.go:930] claim \"volume-1135/pvc-s8br4\" bound to volume \"aws-j97lg\"\nI1010 15:50:45.758571       1 pv_controller.go:1340] isVolumeReleased[pvc-ca38fde3-4998-43ac-a84d-ea029ddeabef]: volume is released\nI1010 15:50:45.758573       1 pv_controller.go:1340] isVolumeReleased[pvc-25f038ca-8dba-4753-896d-a810200b92b0]: volume is released\nI1010 15:50:45.774993       1 pv_controller.go:879] volume \"aws-j97lg\" entered phase \"Bound\"\nI1010 15:50:45.775017       1 pv_controller.go:982] volume \"aws-j97lg\" bound to claim \"volume-1135/pvc-s8br4\"\nI1010 15:50:45.785421       1 pv_controller.go:823] claim \"volume-1135/pvc-s8br4\" entered phase \"Bound\"\nI1010 15:50:45.786339       1 event.go:291] \"Event occurred\" object=\"statefulset-2611/datadir-ss-0\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"ebs.csi.aws.com\\\" or manually created by system administrator\"\nI1010 15:50:45.786560       1 pv_controller.go:930] claim \"provisioning-7960/pvc-zsnq2\" bound to volume \"local-9wq86\"\nI1010 15:50:45.799076       1 pv_controller.go:879] volume \"local-9wq86\" entered phase \"Bound\"\nI1010 15:50:45.799196       1 pv_controller.go:982] volume \"local-9wq86\" bound to claim \"provisioning-7960/pvc-zsnq2\"\nI1010 15:50:45.809650       1 pv_controller.go:823] claim \"provisioning-7960/pvc-zsnq2\" entered phase \"Bound\"\nE1010 15:50:45.833030       1 namespace_controller.go:162] deletion of namespace services-3513 failed: unexpected items still remain in namespace: services-3513 for gvr: /v1, Resource=pods\nE1010 15:50:45.960059       1 namespace_controller.go:162] deletion of namespace services-3513 failed: unexpected items still remain in namespace: services-3513 for gvr: /v1, Resource=pods\nI1010 15:50:45.989690       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-ca38fde3-4998-43ac-a84d-ea029ddeabef\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-4394^4\") on node \"ip-172-20-54-137.sa-east-1.compute.internal\" \nI1010 15:50:46.000272       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-25f038ca-8dba-4753-896d-a810200b92b0\" (UniqueName: 
\"kubernetes.io/csi/ebs.csi.aws.com^vol-07ac28e0c44dc26ae\") on node \"ip-172-20-54-137.sa-east-1.compute.internal\" \nI1010 15:50:46.007515       1 operation_generator.go:1577] Verified volume is safe to detach for volume \"pvc-ca38fde3-4998-43ac-a84d-ea029ddeabef\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-4394^4\") on node \"ip-172-20-54-137.sa-east-1.compute.internal\" \nI1010 15:50:46.007723       1 operation_generator.go:1577] Verified volume is safe to detach for volume \"pvc-25f038ca-8dba-4753-896d-a810200b92b0\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-07ac28e0c44dc26ae\") on node \"ip-172-20-54-137.sa-east-1.compute.internal\" \nE1010 15:50:46.061858       1 tokens_controller.go:262] error synchronizing serviceaccount webhook-9906/default: secrets \"default-token-fbmj5\" is forbidden: unable to create new content in namespace webhook-9906 because it is being terminated\nI1010 15:50:46.147264       1 garbagecollector.go:471] \"Processing object\" object=\"services-5131/execpod-affinitykqcqc\" objectUID=69e23a8a-0030-40b4-895c-f6abcb27c8a9 kind=\"CiliumEndpoint\" virtual=false\nI1010 15:50:46.153964       1 garbagecollector.go:580] \"Deleting object\" object=\"services-5131/execpod-affinitykqcqc\" objectUID=69e23a8a-0030-40b4-895c-f6abcb27c8a9 kind=\"CiliumEndpoint\" propagationPolicy=Background\nE1010 15:50:46.202521       1 namespace_controller.go:162] deletion of namespace services-3513 failed: unexpected items still remain in namespace: services-3513 for gvr: /v1, Resource=pods\nI1010 15:50:46.337241       1 stateful_set_control.go:555] StatefulSet statefulset-2594/ss2 terminating Pod ss2-0 for update\nI1010 15:50:46.361944       1 event.go:291] \"Event occurred\" object=\"statefulset-2594/ss2\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"delete Pod ss2-0 in StatefulSet ss2 successful\"\nE1010 15:50:46.442513       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI1010 15:50:46.555953       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume \"pvc-ca38fde3-4998-43ac-a84d-ea029ddeabef\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-4394^4\") on node \"ip-172-20-54-137.sa-east-1.compute.internal\" \nI1010 15:50:46.595966       1 garbagecollector.go:471] \"Processing object\" object=\"services-5131/affinity-nodeport-transition-gd66d\" objectUID=f1fb3547-4284-4490-9360-deb177efde6a kind=\"Pod\" virtual=false\nI1010 15:50:46.597286       1 garbagecollector.go:471] \"Processing object\" object=\"services-5131/affinity-nodeport-transition-srw86\" objectUID=90f87b75-be8e-4c43-9f50-88a5b995ce70 kind=\"Pod\" virtual=false\nI1010 15:50:46.597312       1 garbagecollector.go:471] \"Processing object\" object=\"services-5131/affinity-nodeport-transition-gszw4\" objectUID=34636d6d-fa6c-445c-993f-0c9e445c4187 kind=\"Pod\" virtual=false\nI1010 15:50:46.616467       1 garbagecollector.go:580] \"Deleting object\" object=\"services-5131/affinity-nodeport-transition-srw86\" objectUID=90f87b75-be8e-4c43-9f50-88a5b995ce70 kind=\"Pod\" propagationPolicy=Background\nI1010 15:50:46.616858       1 garbagecollector.go:580] \"Deleting object\" object=\"services-5131/affinity-nodeport-transition-gd66d\" objectUID=f1fb3547-4284-4490-9360-deb177efde6a kind=\"Pod\" propagationPolicy=Background\nI1010 15:50:46.617418       1 
garbagecollector.go:580] \"Deleting object\" object=\"services-5131/affinity-nodeport-transition-gszw4\" objectUID=34636d6d-fa6c-445c-993f-0c9e445c4187 kind=\"Pod\" propagationPolicy=Background\nW1010 15:50:46.679904       1 endpointslice_controller.go:306] Error syncing endpoint slices for service \"services-5131/affinity-nodeport-transition\", retrying. Error: EndpointSlice informer cache is out of date\nI1010 15:50:46.802482       1 pv_controller.go:879] volume \"pvc-fd094efd-87ac-4e60-8426-41ceb6029f14\" entered phase \"Bound\"\nI1010 15:50:46.802517       1 pv_controller.go:982] volume \"pvc-fd094efd-87ac-4e60-8426-41ceb6029f14\" bound to claim \"statefulset-2611/datadir-ss-0\"\nI1010 15:50:46.825350       1 pv_controller.go:823] claim \"statefulset-2611/datadir-ss-0\" entered phase \"Bound\"\nI1010 15:50:46.836634       1 pv_controller_base.go:505] deletion of claim \"csi-mock-volumes-4394/pvc-rbv7l\" was already processed\nE1010 15:50:46.838754       1 namespace_controller.go:162] deletion of namespace disruption-8003 failed: unexpected items still remain in namespace: disruption-8003 for gvr: /v1, Resource=pods\nE1010 15:50:46.860714       1 namespace_controller.go:162] deletion of namespace services-3513 failed: unexpected items still remain in namespace: services-3513 for gvr: /v1, Resource=pods\nE1010 15:50:47.134187       1 namespace_controller.go:162] deletion of namespace services-3513 failed: unexpected items still remain in namespace: services-3513 for gvr: /v1, Resource=pods\nI1010 15:50:47.328496       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-fd094efd-87ac-4e60-8426-41ceb6029f14\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-08fd0838047ab3bda\") from node \"ip-172-20-61-156.sa-east-1.compute.internal\" \nE1010 15:50:47.367341       1 namespace_controller.go:162] deletion of namespace services-3513 failed: unexpected items still remain in namespace: services-3513 for gvr: /v1, Resource=pods\nI1010 15:50:47.582350       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"apply-7214/deployment-shared-map-item-removal-55649fd747\" need=3 creating=3\nI1010 15:50:47.583125       1 event.go:291] \"Event occurred\" object=\"apply-7214/deployment-shared-map-item-removal\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set deployment-shared-map-item-removal-55649fd747 to 3\"\nI1010 15:50:47.588935       1 event.go:291] \"Event occurred\" object=\"apply-7214/deployment-shared-map-item-removal-55649fd747\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: deployment-shared-map-item-removal-55649fd747-q84jx\"\nI1010 15:50:47.595526       1 event.go:291] \"Event occurred\" object=\"apply-7214/deployment-shared-map-item-removal-55649fd747\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: deployment-shared-map-item-removal-55649fd747-4r4g8\"\nI1010 15:50:47.598181       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"apply-7214/deployment-shared-map-item-removal\" err=\"Operation cannot be fulfilled on deployments.apps \\\"deployment-shared-map-item-removal\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI1010 15:50:47.599094       1 event.go:291] \"Event occurred\" object=\"apply-7214/deployment-shared-map-item-removal-55649fd747\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" 
type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: deployment-shared-map-item-removal-55649fd747-gdxvs\"\nI1010 15:50:47.732614       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"aws-j97lg\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-01342bc3c22f91e4b\") from node \"ip-172-20-33-168.sa-east-1.compute.internal\" \nE1010 15:50:47.747839       1 namespace_controller.go:162] deletion of namespace services-3513 failed: unexpected items still remain in namespace: services-3513 for gvr: /v1, Resource=pods\nI1010 15:50:47.778545       1 event.go:291] \"Event occurred\" object=\"statefulset-2594/ss2\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss2-0 in StatefulSet ss2 successful\"\nI1010 15:50:47.793440       1 event.go:291] \"Event occurred\" object=\"provisioning-6865/awskn6gd\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForFirstConsumer\" message=\"waiting for first consumer to be created before binding\"\nI1010 15:50:47.870985       1 event.go:291] \"Event occurred\" object=\"apply-7214/deployment-shared-map-item-removal\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set deployment-shared-map-item-removal-55649fd747 to 4\"\nI1010 15:50:47.871761       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"apply-7214/deployment-shared-map-item-removal-55649fd747\" need=4 creating=1\nI1010 15:50:47.878458       1 event.go:291] \"Event occurred\" object=\"apply-7214/deployment-shared-map-item-removal-55649fd747\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: deployment-shared-map-item-removal-55649fd747-ghw2r\"\nI1010 15:50:48.094215       1 event.go:291] \"Event occurred\" object=\"provisioning-6865/awskn6gd\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"ebs.csi.aws.com\\\" or manually created by system administrator\"\nI1010 15:50:48.180481       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-1105-1540/csi-mockplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-mockplugin-0 in StatefulSet csi-mockplugin successful\"\nI1010 15:50:48.184521       1 garbagecollector.go:471] \"Processing object\" object=\"container-probe-690/startup-0fa999e8-0e81-4f71-ab0f-0e6734167fdc\" objectUID=01b749fe-5b06-4b6f-9544-bbd898583171 kind=\"CiliumEndpoint\" virtual=false\nI1010 15:50:48.254111       1 garbagecollector.go:580] \"Deleting object\" object=\"container-probe-690/startup-0fa999e8-0e81-4f71-ab0f-0e6734167fdc\" objectUID=01b749fe-5b06-4b6f-9544-bbd898583171 kind=\"CiliumEndpoint\" propagationPolicy=Background\nE1010 15:50:48.324629       1 namespace_controller.go:162] deletion of namespace services-3513 failed: unexpected items still remain in namespace: services-3513 for gvr: /v1, Resource=pods\nI1010 15:50:48.574720       1 namespace_controller.go:185] Namespace has been deleted persistent-local-volumes-test-4324\nI1010 15:50:48.589939       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-4774-3147\nI1010 15:50:48.735294       1 garbagecollector.go:471] \"Processing object\" object=\"apply-7214/deployment-shared-map-item-removal-55649fd747\" 
objectUID=d46df3fd-1f1b-4e4e-ae72-03d44eb2b5c0 kind=\"ReplicaSet\" virtual=false\nI1010 15:50:48.735457       1 deployment_controller.go:583] \"Deployment has been deleted\" deployment=\"apply-7214/deployment-shared-map-item-removal\"\nI1010 15:50:48.736928       1 garbagecollector.go:580] \"Deleting object\" object=\"apply-7214/deployment-shared-map-item-removal-55649fd747\" objectUID=d46df3fd-1f1b-4e4e-ae72-03d44eb2b5c0 kind=\"ReplicaSet\" propagationPolicy=Background\nI1010 15:50:48.739018       1 garbagecollector.go:471] \"Processing object\" object=\"apply-7214/deployment-shared-map-item-removal-55649fd747-4r4g8\" objectUID=e32fe4b9-d7e5-4527-948c-54defe5f3cbd kind=\"Pod\" virtual=false\nI1010 15:50:48.739270       1 garbagecollector.go:471] \"Processing object\" object=\"apply-7214/deployment-shared-map-item-removal-55649fd747-gdxvs\" objectUID=fc1c10ee-fa5b-4192-a769-5d87c8d397ef kind=\"Pod\" virtual=false\nI1010 15:50:48.739497       1 garbagecollector.go:471] \"Processing object\" object=\"apply-7214/deployment-shared-map-item-removal-55649fd747-ghw2r\" objectUID=9382fdbd-c4b0-4fa5-b53b-89313986f5ff kind=\"Pod\" virtual=false\nI1010 15:50:48.739853       1 garbagecollector.go:471] \"Processing object\" object=\"apply-7214/deployment-shared-map-item-removal-55649fd747-q84jx\" objectUID=d177ad63-4d75-49cf-937c-140cb469e830 kind=\"Pod\" virtual=false\nI1010 15:50:48.742062       1 garbagecollector.go:580] \"Deleting object\" object=\"apply-7214/deployment-shared-map-item-removal-55649fd747-ghw2r\" objectUID=9382fdbd-c4b0-4fa5-b53b-89313986f5ff kind=\"Pod\" propagationPolicy=Background\nI1010 15:50:48.743317       1 garbagecollector.go:580] \"Deleting object\" object=\"apply-7214/deployment-shared-map-item-removal-55649fd747-gdxvs\" objectUID=fc1c10ee-fa5b-4192-a769-5d87c8d397ef kind=\"Pod\" propagationPolicy=Background\nI1010 15:50:48.743563       1 garbagecollector.go:580] \"Deleting object\" object=\"apply-7214/deployment-shared-map-item-removal-55649fd747-4r4g8\" objectUID=e32fe4b9-d7e5-4527-948c-54defe5f3cbd kind=\"Pod\" propagationPolicy=Background\nI1010 15:50:48.743865       1 garbagecollector.go:580] \"Deleting object\" object=\"apply-7214/deployment-shared-map-item-removal-55649fd747-q84jx\" objectUID=d177ad63-4d75-49cf-937c-140cb469e830 kind=\"Pod\" propagationPolicy=Background\nE1010 15:50:49.085288       1 namespace_controller.go:162] deletion of namespace services-3513 failed: unexpected items still remain in namespace: services-3513 for gvr: /v1, Resource=pods\nI1010 15:50:49.702596       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume \"pvc-fd094efd-87ac-4e60-8426-41ceb6029f14\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-08fd0838047ab3bda\") from node \"ip-172-20-61-156.sa-east-1.compute.internal\" \nI1010 15:50:49.702757       1 event.go:291] \"Event occurred\" object=\"statefulset-2611/ss-0\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-fd094efd-87ac-4e60-8426-41ceb6029f14\\\" \"\nI1010 15:50:50.107265       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume \"aws-j97lg\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-01342bc3c22f91e4b\") from node \"ip-172-20-33-168.sa-east-1.compute.internal\" \nI1010 15:50:50.107632       1 event.go:291] \"Event occurred\" object=\"volume-1135/aws-injector\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded 
for volume \\\"aws-j97lg\\\" \"\nE1010 15:50:50.290540       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE1010 15:50:50.423296       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE1010 15:50:50.515137       1 namespace_controller.go:162] deletion of namespace services-3513 failed: unexpected items still remain in namespace: services-3513 for gvr: /v1, Resource=pods\nI1010 15:50:51.342470       1 namespace_controller.go:185] Namespace has been deleted webhook-9906\nI1010 15:50:51.524797       1 pv_controller.go:879] volume \"pvc-ee567f5d-0327-4f7f-97fd-cc45f494a791\" entered phase \"Bound\"\nI1010 15:50:51.524833       1 pv_controller.go:982] volume \"pvc-ee567f5d-0327-4f7f-97fd-cc45f494a791\" bound to claim \"provisioning-6865/awskn6gd\"\nI1010 15:50:51.534522       1 pv_controller.go:823] claim \"provisioning-6865/awskn6gd\" entered phase \"Bound\"\nI1010 15:50:51.723298       1 namespace_controller.go:185] Namespace has been deleted webhook-9906-markers\nE1010 15:50:51.815883       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE1010 15:50:51.909850       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-2093/default: secrets \"default-token-5jgp7\" is forbidden: unable to create new content in namespace provisioning-2093 because it is being terminated\nI1010 15:50:52.108550       1 namespace_controller.go:185] Namespace has been deleted provisioning-5843\nI1010 15:50:52.195752       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-ee567f5d-0327-4f7f-97fd-cc45f494a791\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-05b33fa8195d49a83\") from node \"ip-172-20-33-168.sa-east-1.compute.internal\" \nI1010 15:50:52.238541       1 namespace_controller.go:185] Namespace has been deleted crd-publish-openapi-6304\nI1010 15:50:52.388146       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-6047/pvc-5wzlf\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-mock-csi-mock-volumes-6047\\\" or manually created by system administrator\"\nI1010 15:50:52.390064       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-6047/pvc-5wzlf\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-mock-csi-mock-volumes-6047\\\" or manually created by system administrator\"\nI1010 15:50:52.403873       1 pv_controller.go:879] volume \"pvc-6e70a56c-b2ee-4710-b44b-2fd4ef0ff454\" entered phase \"Bound\"\nI1010 15:50:52.403902       1 pv_controller.go:982] volume \"pvc-6e70a56c-b2ee-4710-b44b-2fd4ef0ff454\" bound to claim \"csi-mock-volumes-6047/pvc-5wzlf\"\nI1010 15:50:52.411562       1 pv_controller.go:823] claim \"csi-mock-volumes-6047/pvc-5wzlf\" entered phase \"Bound\"\nI1010 15:50:53.000377       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume 
\"pvc-6e70a56c-b2ee-4710-b44b-2fd4ef0ff454\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-6047^4\") from node \"ip-172-20-54-137.sa-east-1.compute.internal\" \nE1010 15:50:53.269546       1 namespace_controller.go:162] deletion of namespace services-3513 failed: unexpected items still remain in namespace: services-3513 for gvr: /v1, Resource=pods\nI1010 15:50:53.442514       1 namespace_controller.go:185] Namespace has been deleted container-runtime-1630\nI1010 15:50:53.554602       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume \"pvc-6e70a56c-b2ee-4710-b44b-2fd4ef0ff454\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-6047^4\") from node \"ip-172-20-54-137.sa-east-1.compute.internal\" \nI1010 15:50:53.555024       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-6047/pvc-volume-tester-qdfj2\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-6e70a56c-b2ee-4710-b44b-2fd4ef0ff454\\\" \"\nI1010 15:50:54.496338       1 namespace_controller.go:185] Namespace has been deleted volume-expand-1028\nI1010 15:50:54.521624       1 namespace_controller.go:185] Namespace has been deleted node-problem-detector-3754\nI1010 15:50:54.615223       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume \"pvc-ee567f5d-0327-4f7f-97fd-cc45f494a791\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-05b33fa8195d49a83\") from node \"ip-172-20-33-168.sa-east-1.compute.internal\" \nI1010 15:50:54.615538       1 event.go:291] \"Event occurred\" object=\"provisioning-6865/pod-subpath-test-dynamicpv-bh5n\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-ee567f5d-0327-4f7f-97fd-cc45f494a791\\\" \"\nI1010 15:50:54.732991       1 stateful_set_control.go:555] StatefulSet statefulset-6702/ss terminating Pod ss-0 for update\nI1010 15:50:54.747104       1 event.go:291] \"Event occurred\" object=\"statefulset-6702/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"delete Pod ss-0 in StatefulSet ss successful\"\nE1010 15:50:54.915535       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI1010 15:50:55.909673       1 stateful_set.go:440] StatefulSet has been deleted volume-expand-1028-5375/csi-hostpathplugin\nI1010 15:50:55.909686       1 garbagecollector.go:471] \"Processing object\" object=\"volume-expand-1028-5375/csi-hostpathplugin-7f794b8fb9\" objectUID=a0264c5c-524f-4e37-b23a-9e7f29cffb32 kind=\"ControllerRevision\" virtual=false\nI1010 15:50:55.909724       1 garbagecollector.go:471] \"Processing object\" object=\"volume-expand-1028-5375/csi-hostpathplugin-0\" objectUID=5a623e4f-0e35-4821-b877-ad360ef5a166 kind=\"Pod\" virtual=false\nI1010 15:50:55.912189       1 garbagecollector.go:580] \"Deleting object\" object=\"volume-expand-1028-5375/csi-hostpathplugin-0\" objectUID=5a623e4f-0e35-4821-b877-ad360ef5a166 kind=\"Pod\" propagationPolicy=Background\nI1010 15:50:55.912659       1 garbagecollector.go:580] \"Deleting object\" object=\"volume-expand-1028-5375/csi-hostpathplugin-7f794b8fb9\" objectUID=a0264c5c-524f-4e37-b23a-9e7f29cffb32 kind=\"ControllerRevision\" propagationPolicy=Background\nI1010 15:50:56.811998       1 namespace_controller.go:185] 
Namespace has been deleted events-5965\nI1010 15:50:56.839846       1 garbagecollector.go:471] \"Processing object\" object=\"services-5131/affinity-nodeport-transition-vsftb\" objectUID=5fd24a45-f028-4274-a6f0-56d9a52f9b0b kind=\"EndpointSlice\" virtual=false\nI1010 15:50:56.848183       1 garbagecollector.go:580] \"Deleting object\" object=\"services-5131/affinity-nodeport-transition-vsftb\" objectUID=5fd24a45-f028-4274-a6f0-56d9a52f9b0b kind=\"EndpointSlice\" propagationPolicy=Background\nI1010 15:50:56.948151       1 namespace_controller.go:185] Namespace has been deleted provisioning-2093\nI1010 15:50:56.996562       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"provisioning-4222/pvc-wcqqc\"\nI1010 15:50:57.002221       1 pv_controller.go:640] volume \"local-gdkfx\" is released and reclaim policy \"Retain\" will be executed\nI1010 15:50:57.004900       1 pv_controller.go:879] volume \"local-gdkfx\" entered phase \"Released\"\nI1010 15:50:57.142150       1 pv_controller_base.go:505] deletion of claim \"provisioning-4222/pvc-wcqqc\" was already processed\nE1010 15:50:57.204206       1 namespace_controller.go:162] deletion of namespace disruption-8003 failed: unexpected items still remain in namespace: disruption-8003 for gvr: /v1, Resource=pods\nI1010 15:50:57.392265       1 stateful_set_control.go:521] StatefulSet statefulset-2594/ss2 terminating Pod ss2-2 for scale down\nI1010 15:50:57.398533       1 event.go:291] \"Event occurred\" object=\"statefulset-2594/ss2\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"delete Pod ss2-2 in StatefulSet ss2 successful\"\nI1010 15:50:57.968563       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"services-1728/service-proxy-toggled\" need=3 creating=3\nI1010 15:50:57.973712       1 event.go:291] \"Event occurred\" object=\"services-1728/service-proxy-toggled\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: service-proxy-toggled-vztn5\"\nI1010 15:50:57.981058       1 event.go:291] \"Event occurred\" object=\"services-1728/service-proxy-toggled\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: service-proxy-toggled-h6wxt\"\nI1010 15:50:57.981581       1 event.go:291] \"Event occurred\" object=\"services-1728/service-proxy-toggled\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: service-proxy-toggled-hrktm\"\nI1010 15:50:58.557817       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"csi-mock-volumes-5498/pvc-mxl2l\"\nI1010 15:50:58.564363       1 pv_controller.go:640] volume \"pvc-49fa8fdf-60b8-42c2-859e-576cf5846ae8\" is released and reclaim policy \"Delete\" will be executed\nI1010 15:50:58.568069       1 pv_controller.go:879] volume \"pvc-49fa8fdf-60b8-42c2-859e-576cf5846ae8\" entered phase \"Released\"\nI1010 15:50:58.570439       1 pv_controller.go:1340] isVolumeReleased[pvc-49fa8fdf-60b8-42c2-859e-576cf5846ae8]: volume is released\nI1010 15:50:58.680906       1 namespace_controller.go:185] Namespace has been deleted container-probe-690\nI1010 15:50:58.706235       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-4394\nI1010 15:50:58.896412       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-4394-8264/csi-mockplugin-59c6bb6c46\" objectUID=f292cbce-717f-4469-aac1-86bde2e5657e kind=\"ControllerRevision\" 
virtual=false\nI1010 15:50:58.896584       1 stateful_set.go:440] StatefulSet has been deleted csi-mock-volumes-4394-8264/csi-mockplugin\nI1010 15:50:58.896670       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-4394-8264/csi-mockplugin-0\" objectUID=b91309db-dffd-40b9-9e51-64d93b5f6065 kind=\"Pod\" virtual=false\nI1010 15:50:58.899979       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-4394-8264/csi-mockplugin-59c6bb6c46\" objectUID=f292cbce-717f-4469-aac1-86bde2e5657e kind=\"ControllerRevision\" propagationPolicy=Background\nI1010 15:50:58.900342       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-4394-8264/csi-mockplugin-0\" objectUID=b91309db-dffd-40b9-9e51-64d93b5f6065 kind=\"Pod\" propagationPolicy=Background\nI1010 15:50:58.993743       1 pv_controller.go:1340] isVolumeReleased[pvc-25f038ca-8dba-4753-896d-a810200b92b0]: volume is released\nI1010 15:50:59.041315       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-4394-8264/csi-mockplugin-attacher-5b74954b68\" objectUID=7b520a5e-3fe7-40f1-acb3-cb238e36bec1 kind=\"ControllerRevision\" virtual=false\nI1010 15:50:59.041857       1 stateful_set.go:440] StatefulSet has been deleted csi-mock-volumes-4394-8264/csi-mockplugin-attacher\nI1010 15:50:59.042034       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-4394-8264/csi-mockplugin-attacher-0\" objectUID=2fd01e6d-08e0-46c8-9ba6-6d9b4f87a5ee kind=\"Pod\" virtual=false\nI1010 15:50:59.043340       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-4394-8264/csi-mockplugin-attacher-5b74954b68\" objectUID=7b520a5e-3fe7-40f1-acb3-cb238e36bec1 kind=\"ControllerRevision\" propagationPolicy=Background\nI1010 15:50:59.043595       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-4394-8264/csi-mockplugin-attacher-0\" objectUID=2fd01e6d-08e0-46c8-9ba6-6d9b4f87a5ee kind=\"Pod\" propagationPolicy=Background\nI1010 15:50:59.186930       1 pv_controller_base.go:505] deletion of claim \"fsgroupchangepolicy-3401/awsvqlkh\" was already processed\nI1010 15:50:59.191360       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume \"pvc-25f038ca-8dba-4753-896d-a810200b92b0\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-07ac28e0c44dc26ae\") on node \"ip-172-20-54-137.sa-east-1.compute.internal\" \nI1010 15:50:59.664024       1 namespace_controller.go:185] Namespace has been deleted apparmor-9203\nI1010 15:51:00.324532       1 event.go:291] \"Event occurred\" object=\"topology-1406/pvc-nzwvw\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"ebs.csi.aws.com\\\" or manually created by system administrator\"\nI1010 15:51:00.324558       1 event.go:291] \"Event occurred\" object=\"topology-1406/pvc-nzwvw\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"ebs.csi.aws.com\\\" or manually created by system administrator\"\nI1010 15:51:00.431864       1 event.go:291] \"Event occurred\" object=\"statefulset-6702/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss-0 in StatefulSet ss successful\"\nI1010 15:51:00.628781       1 pv_controller.go:879] volume \"local-pv47r85\" entered phase 
\"Available\"\nI1010 15:51:00.667164       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"provisioning-7960/pvc-zsnq2\"\nI1010 15:51:00.676040       1 pv_controller.go:640] volume \"local-9wq86\" is released and reclaim policy \"Retain\" will be executed\nI1010 15:51:00.680399       1 pv_controller.go:879] volume \"local-9wq86\" entered phase \"Released\"\nI1010 15:51:00.751045       1 event.go:291] \"Event occurred\" object=\"topology-1406/pvc-nzwvw\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"ebs.csi.aws.com\\\" or manually created by system administrator\"\nI1010 15:51:00.752403       1 pv_controller.go:1340] isVolumeReleased[pvc-49fa8fdf-60b8-42c2-859e-576cf5846ae8]: volume is released\nI1010 15:51:00.767459       1 pv_controller.go:930] claim \"persistent-local-volumes-test-4000/pvc-sfxwd\" bound to volume \"local-pv47r85\"\nI1010 15:51:00.776221       1 pv_controller.go:879] volume \"local-pv47r85\" entered phase \"Bound\"\nI1010 15:51:00.776250       1 pv_controller.go:982] volume \"local-pv47r85\" bound to claim \"persistent-local-volumes-test-4000/pvc-sfxwd\"\nI1010 15:51:00.784676       1 pv_controller.go:823] claim \"persistent-local-volumes-test-4000/pvc-sfxwd\" entered phase \"Bound\"\nI1010 15:51:00.812366       1 pv_controller_base.go:505] deletion of claim \"provisioning-7960/pvc-zsnq2\" was already processed\nE1010 15:51:01.259457       1 tokens_controller.go:262] error synchronizing serviceaccount volume-expand-1028-5375/default: secrets \"default-token-lrtsr\" is forbidden: unable to create new content in namespace volume-expand-1028-5375 because it is being terminated\nI1010 15:51:01.602492       1 resource_quota_controller.go:435] syncing resource quota controller with updated resources from discovery: added: [], removed: [crd-publish-openapi-test-common-group.example.com/v6, Resource=e2e-test-crd-publish-openapi-3726-crds crd-publish-openapi-test-common-group.example.com/v6, Resource=e2e-test-crd-publish-openapi-395-crds]\nI1010 15:51:01.602568       1 shared_informer.go:240] Waiting for caches to sync for resource quota\nI1010 15:51:01.602609       1 shared_informer.go:247] Caches are synced for resource quota \nI1010 15:51:01.602646       1 resource_quota_controller.go:454] synced quota controller\nI1010 15:51:01.801725       1 garbagecollector.go:213] syncing garbage collector with updated resources from discovery (attempt 1): added: [], removed: [crd-publish-openapi-test-common-group.example.com/v6, Resource=e2e-test-crd-publish-openapi-3726-crds crd-publish-openapi-test-common-group.example.com/v6, Resource=e2e-test-crd-publish-openapi-395-crds]\nI1010 15:51:01.801934       1 shared_informer.go:240] Waiting for caches to sync for garbage collector\nI1010 15:51:01.803631       1 shared_informer.go:247] Caches are synced for garbage collector \nI1010 15:51:01.803647       1 garbagecollector.go:254] synced garbage collector\nE1010 15:51:02.333368       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI1010 15:51:02.732302       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"kubectl-8261/agnhost-primary\" need=1 creating=1\nI1010 15:51:02.737675       1 event.go:291] \"Event occurred\" object=\"kubectl-8261/agnhost-primary\" 
kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: agnhost-primary-bk7wn\"\nI1010 15:51:03.095383       1 garbagecollector.go:471] \"Processing object\" object=\"kubectl-8136/agnhost-primary-2pt2w\" objectUID=e8b91940-1b12-46e3-8d0f-3bc39bbfe938 kind=\"Pod\" virtual=false\nI1010 15:51:03.117901       1 garbagecollector.go:580] \"Deleting object\" object=\"kubectl-8136/agnhost-primary-2pt2w\" objectUID=e8b91940-1b12-46e3-8d0f-3bc39bbfe938 kind=\"Pod\" propagationPolicy=Background\nE1010 15:51:03.138517       1 tokens_controller.go:262] error synchronizing serviceaccount kubectl-8136/default: secrets \"default-token-pzr7r\" is forbidden: unable to create new content in namespace kubectl-8136 because it is being terminated\nI1010 15:51:03.157749       1 garbagecollector.go:471] \"Processing object\" object=\"kubectl-8136/agnhost-primary-cplv6\" objectUID=1301fcb8-c1c1-4dbc-b8e5-af3fa15412d4 kind=\"EndpointSlice\" virtual=false\nI1010 15:51:03.161346       1 garbagecollector.go:580] \"Deleting object\" object=\"kubectl-8136/agnhost-primary-cplv6\" objectUID=1301fcb8-c1c1-4dbc-b8e5-af3fa15412d4 kind=\"EndpointSlice\" propagationPolicy=Background\nI1010 15:51:03.539881       1 namespace_controller.go:185] Namespace has been deleted services-3513\nI1010 15:51:03.762966       1 pv_controller.go:879] volume \"pvc-a34f3781-0025-47c3-936f-c3af09e5f654\" entered phase \"Bound\"\nI1010 15:51:03.762996       1 pv_controller.go:982] volume \"pvc-a34f3781-0025-47c3-936f-c3af09e5f654\" bound to claim \"topology-1406/pvc-nzwvw\"\nI1010 15:51:03.769992       1 pv_controller.go:823] claim \"topology-1406/pvc-nzwvw\" entered phase \"Bound\"\nI1010 15:51:04.196889       1 stateful_set_control.go:521] StatefulSet statefulset-2594/ss2 terminating Pod ss2-1 for scale down\nW1010 15:51:04.201790       1 endpointslice_controller.go:306] Error syncing endpoint slices for service \"statefulset-2594/test\", retrying. 
Error: EndpointSlice informer cache is out of date\nI1010 15:51:04.217329       1 event.go:291] \"Event occurred\" object=\"statefulset-2594/ss2\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"delete Pod ss2-1 in StatefulSet ss2 successful\"\nE1010 15:51:04.254292       1 tokens_controller.go:262] error synchronizing serviceaccount csi-mock-volumes-4394-8264/default: secrets \"default-token-lsn77\" is forbidden: unable to create new content in namespace csi-mock-volumes-4394-8264 because it is being terminated\nI1010 15:51:04.428145       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-49fa8fdf-60b8-42c2-859e-576cf5846ae8\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-5498^4\") on node \"ip-172-20-33-168.sa-east-1.compute.internal\" \nI1010 15:51:04.430815       1 operation_generator.go:1577] Verified volume is safe to detach for volume \"pvc-49fa8fdf-60b8-42c2-859e-576cf5846ae8\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-5498^4\") on node \"ip-172-20-33-168.sa-east-1.compute.internal\" \nI1010 15:51:04.830977       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-a34f3781-0025-47c3-936f-c3af09e5f654\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0555d632762a31487\") from node \"ip-172-20-33-168.sa-east-1.compute.internal\" \nI1010 15:51:04.973187       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume \"pvc-49fa8fdf-60b8-42c2-859e-576cf5846ae8\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-5498^4\") on node \"ip-172-20-33-168.sa-east-1.compute.internal\" \nI1010 15:51:05.593043       1 pv_controller_base.go:505] deletion of claim \"csi-mock-volumes-5498/pvc-mxl2l\" was already processed\nI1010 15:51:05.603305       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"provisioning-6865/awskn6gd\"\nI1010 15:51:05.626116       1 pv_controller.go:640] volume \"pvc-ee567f5d-0327-4f7f-97fd-cc45f494a791\" is released and reclaim policy \"Delete\" will be executed\nI1010 15:51:05.630372       1 pv_controller.go:879] volume \"pvc-ee567f5d-0327-4f7f-97fd-cc45f494a791\" entered phase \"Released\"\nI1010 15:51:05.631809       1 pv_controller.go:1340] isVolumeReleased[pvc-ee567f5d-0327-4f7f-97fd-cc45f494a791]: volume is released\nI1010 15:51:05.728018       1 namespace_controller.go:185] Namespace has been deleted kubelet-test-7192\nI1010 15:51:05.881818       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-1105/pvc-4vzqk\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForFirstConsumer\" message=\"waiting for first consumer to be created before binding\"\nI1010 15:51:06.037205       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-1105/pvc-4vzqk\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-mock-csi-mock-volumes-1105\\\" or manually created by system administrator\"\nI1010 15:51:06.040275       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-1105/pvc-4vzqk\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-mock-csi-mock-volumes-1105\\\" or manually created by system administrator\"\nI1010 15:51:06.054094       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume 
\"pvc-6e70a56c-b2ee-4710-b44b-2fd4ef0ff454\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-6047^4\") on node \"ip-172-20-54-137.sa-east-1.compute.internal\" \nI1010 15:51:06.057986       1 operation_generator.go:1577] Verified volume is safe to detach for volume \"pvc-6e70a56c-b2ee-4710-b44b-2fd4ef0ff454\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-6047^4\") on node \"ip-172-20-54-137.sa-east-1.compute.internal\" \nI1010 15:51:06.288547       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"csi-mock-volumes-6047/pvc-5wzlf\"\nI1010 15:51:06.305689       1 event.go:291] \"Event occurred\" object=\"ephemeral-6236-9738/csi-hostpathplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful\"\nI1010 15:51:06.316249       1 pv_controller.go:640] volume \"pvc-6e70a56c-b2ee-4710-b44b-2fd4ef0ff454\" is released and reclaim policy \"Delete\" will be executed\nI1010 15:51:06.327220       1 pv_controller.go:879] volume \"pvc-6e70a56c-b2ee-4710-b44b-2fd4ef0ff454\" entered phase \"Released\"\nI1010 15:51:06.331678       1 pv_controller.go:1340] isVolumeReleased[pvc-6e70a56c-b2ee-4710-b44b-2fd4ef0ff454]: volume is released\nI1010 15:51:06.347824       1 pv_controller_base.go:505] deletion of claim \"csi-mock-volumes-6047/pvc-5wzlf\" was already processed\nI1010 15:51:06.354778       1 pv_controller.go:879] volume \"pvc-e027039b-20eb-4019-bd51-39fcc2469a5b\" entered phase \"Bound\"\nI1010 15:51:06.354864       1 pv_controller.go:982] volume \"pvc-e027039b-20eb-4019-bd51-39fcc2469a5b\" bound to claim \"csi-mock-volumes-1105/pvc-4vzqk\"\nI1010 15:51:06.368902       1 pv_controller.go:823] claim \"csi-mock-volumes-1105/pvc-4vzqk\" entered phase \"Bound\"\nI1010 15:51:06.404926       1 namespace_controller.go:185] Namespace has been deleted volume-expand-1028-5375\nI1010 15:51:06.627389       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume \"pvc-6e70a56c-b2ee-4710-b44b-2fd4ef0ff454\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-6047^4\") on node \"ip-172-20-54-137.sa-east-1.compute.internal\" \nI1010 15:51:06.906531       1 stateful_set_control.go:521] StatefulSet statefulset-2594/ss2 terminating Pod ss2-0 for scale down\nI1010 15:51:06.911839       1 event.go:291] \"Event occurred\" object=\"statefulset-2594/ss2\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"delete Pod ss2-0 in StatefulSet ss2 successful\"\nE1010 15:51:07.079872       1 pv_controller.go:1451] error finding provisioning plugin for claim provisioning-1232/pvc-2prtx: storageclass.storage.k8s.io \"provisioning-1232\" not found\nI1010 15:51:07.080122       1 event.go:291] \"Event occurred\" object=\"provisioning-1232/pvc-2prtx\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"provisioning-1232\\\" not found\"\nI1010 15:51:07.225257       1 pv_controller.go:879] volume \"local-bdrj7\" entered phase \"Available\"\nI1010 15:51:07.230337       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume \"pvc-a34f3781-0025-47c3-936f-c3af09e5f654\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0555d632762a31487\") from node \"ip-172-20-33-168.sa-east-1.compute.internal\" \nI1010 15:51:07.230479       1 event.go:291] \"Event occurred\" 
object=\"topology-1406/pod-53347ec5-c7d4-4dbb-bf82-ac0077c77048\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-a34f3781-0025-47c3-936f-c3af09e5f654\\\" \"\nI1010 15:51:07.309765       1 namespace_controller.go:185] Namespace has been deleted services-5131\nI1010 15:51:07.968339       1 garbagecollector.go:471] \"Processing object\" object=\"statefulset-2594/ss2-5bbbc9fc94\" objectUID=991ec62b-89e1-41dd-81ac-0938715a6ee1 kind=\"ControllerRevision\" virtual=false\nI1010 15:51:07.968660       1 stateful_set.go:440] StatefulSet has been deleted statefulset-2594/ss2\nI1010 15:51:07.968758       1 garbagecollector.go:471] \"Processing object\" object=\"statefulset-2594/ss2-677d6db895\" objectUID=17fa1af4-71d6-44fd-807b-65971e88bcf3 kind=\"ControllerRevision\" virtual=false\nI1010 15:51:07.970272       1 garbagecollector.go:580] \"Deleting object\" object=\"statefulset-2594/ss2-677d6db895\" objectUID=17fa1af4-71d6-44fd-807b-65971e88bcf3 kind=\"ControllerRevision\" propagationPolicy=Background\nI1010 15:51:07.971275       1 garbagecollector.go:580] \"Deleting object\" object=\"statefulset-2594/ss2-5bbbc9fc94\" objectUID=991ec62b-89e1-41dd-81ac-0938715a6ee1 kind=\"ControllerRevision\" propagationPolicy=Background\nE1010 15:51:08.422294       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-7960/default: secrets \"default-token-bsglz\" is forbidden: unable to create new content in namespace provisioning-7960 because it is being terminated\nI1010 15:51:08.786273       1 namespace_controller.go:185] Namespace has been deleted provisioning-4222\nE1010 15:51:09.209331       1 tokens_controller.go:262] error synchronizing serviceaccount subpath-2509/default: secrets \"default-token-g5n67\" is forbidden: unable to create new content in namespace subpath-2509 because it is being terminated\nI1010 15:51:09.542666       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-5583-1652/csi-mockplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-mockplugin-0 in StatefulSet csi-mockplugin successful\"\nW1010 15:51:09.608100       1 reconciler.go:335] Multi-Attach error for volume \"aws-j97lg\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-01342bc3c22f91e4b\") from node \"ip-172-20-42-51.sa-east-1.compute.internal\" Volume is already exclusively attached to node ip-172-20-33-168.sa-east-1.compute.internal and can't be attached to another\nI1010 15:51:09.608261       1 event.go:291] \"Event occurred\" object=\"volume-1135/aws-client\" kind=\"Pod\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedAttachVolume\" message=\"Multi-Attach error for volume \\\"aws-j97lg\\\" Volume is already exclusively attached to one node and can't be attached to another\"\nI1010 15:51:09.713074       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"replication-controller-1309/condition-test\" need=3 creating=3\nI1010 15:51:09.723162       1 event.go:291] \"Event occurred\" object=\"replication-controller-1309/condition-test\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: condition-test-8wmct\"\nI1010 15:51:09.727512       1 event.go:291] \"Event occurred\" object=\"replication-controller-1309/condition-test\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedCreate\" message=\"Error creating: pods \\\"condition-test-htb5t\\\" 
is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\"\nI1010 15:51:09.739359       1 replica_set.go:588] Slow-start failure. Skipping creation of 1 pods, decrementing expectations for ReplicationController replication-controller-1309/condition-test\nI1010 15:51:09.739684       1 event.go:291] \"Event occurred\" object=\"replication-controller-1309/condition-test\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: condition-test-5r9wh\"\nE1010 15:51:09.745141       1 replica_set.go:536] sync \"replication-controller-1309/condition-test\" failed with pods \"condition-test-htb5t\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\nI1010 15:51:09.745514       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"replication-controller-1309/condition-test\" need=3 creating=1\nI1010 15:51:09.749765       1 replica_set.go:588] Slow-start failure. Skipping creation of 1 pods, decrementing expectations for ReplicationController replication-controller-1309/condition-test\nI1010 15:51:09.750281       1 event.go:291] \"Event occurred\" object=\"replication-controller-1309/condition-test\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedCreate\" message=\"Error creating: pods \\\"condition-test-t5vtg\\\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\"\nE1010 15:51:09.758561       1 replica_set.go:536] sync \"replication-controller-1309/condition-test\" failed with pods \"condition-test-t5vtg\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\nI1010 15:51:09.758675       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"replication-controller-1309/condition-test\" need=3 creating=1\nI1010 15:51:09.760003       1 replica_set.go:588] Slow-start failure. Skipping creation of 1 pods, decrementing expectations for ReplicationController replication-controller-1309/condition-test\nE1010 15:51:09.760038       1 replica_set.go:536] sync \"replication-controller-1309/condition-test\" failed with pods \"condition-test-nx7x8\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\nI1010 15:51:09.760074       1 event.go:291] \"Event occurred\" object=\"replication-controller-1309/condition-test\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedCreate\" message=\"Error creating: pods \\\"condition-test-nx7x8\\\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\"\nI1010 15:51:09.768846       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"replication-controller-1309/condition-test\" need=3 creating=1\nI1010 15:51:09.769979       1 replica_set.go:588] Slow-start failure. 
Skipping creation of 1 pods, decrementing expectations for ReplicationController replication-controller-1309/condition-test\nE1010 15:51:09.770099       1 replica_set.go:536] sync \"replication-controller-1309/condition-test\" failed with pods \"condition-test-fmjqm\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\nI1010 15:51:09.770211       1 event.go:291] \"Event occurred\" object=\"replication-controller-1309/condition-test\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedCreate\" message=\"Error creating: pods \\\"condition-test-fmjqm\\\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\"\nI1010 15:51:09.810233       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"replication-controller-1309/condition-test\" need=3 creating=1\nI1010 15:51:09.814926       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-5583-1652/csi-mockplugin-attacher\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-mockplugin-attacher-0 in StatefulSet csi-mockplugin-attacher successful\"\nI1010 15:51:09.817569       1 replica_set.go:588] Slow-start failure. Skipping creation of 1 pods, decrementing expectations for ReplicationController replication-controller-1309/condition-test\nE1010 15:51:09.817769       1 replica_set.go:536] sync \"replication-controller-1309/condition-test\" failed with pods \"condition-test-gxv2c\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\nI1010 15:51:09.818058       1 event.go:291] \"Event occurred\" object=\"replication-controller-1309/condition-test\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedCreate\" message=\"Error creating: pods \\\"condition-test-gxv2c\\\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\"\nI1010 15:51:09.897987       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"replication-controller-1309/condition-test\" need=3 creating=1\nI1010 15:51:09.899699       1 replica_set.go:588] Slow-start failure. Skipping creation of 1 pods, decrementing expectations for ReplicationController replication-controller-1309/condition-test\nE1010 15:51:09.899772       1 replica_set.go:536] sync \"replication-controller-1309/condition-test\" failed with pods \"condition-test-w6szb\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\nI1010 15:51:09.899966       1 event.go:291] \"Event occurred\" object=\"replication-controller-1309/condition-test\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedCreate\" message=\"Error creating: pods \\\"condition-test-w6szb\\\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\"\nI1010 15:51:10.060876       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"replication-controller-1309/condition-test\" need=3 creating=1\nI1010 15:51:10.062768       1 replica_set.go:588] Slow-start failure. 
Skipping creation of 1 pods, decrementing expectations for ReplicationController replication-controller-1309/condition-test\nE1010 15:51:10.062815       1 replica_set.go:536] sync \"replication-controller-1309/condition-test\" failed with pods \"condition-test-5sgtf\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\nI1010 15:51:10.062860       1 event.go:291] \"Event occurred\" object=\"replication-controller-1309/condition-test\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedCreate\" message=\"Error creating: pods \\\"condition-test-5sgtf\\\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\"\nI1010 15:51:11.120663       1 namespace_controller.go:185] Namespace has been deleted cronjob-2435\nE1010 15:51:11.440037       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI1010 15:51:11.509281       1 namespace_controller.go:185] Namespace has been deleted fsgroupchangepolicy-3401\nE1010 15:51:12.602496       1 tokens_controller.go:262] error synchronizing serviceaccount csi-mock-volumes-5498/default: secrets \"default-token-pzhc5\" is forbidden: unable to create new content in namespace csi-mock-volumes-5498 because it is being terminated\nI1010 15:51:13.353573       1 garbagecollector.go:471] \"Processing object\" object=\"kubectl-8126/httpd\" objectUID=98ca6311-8a55-41c4-a40f-94cf59d90443 kind=\"CiliumEndpoint\" virtual=false\nI1010 15:51:13.384885       1 garbagecollector.go:580] \"Deleting object\" object=\"kubectl-8126/httpd\" objectUID=98ca6311-8a55-41c4-a40f-94cf59d90443 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI1010 15:51:13.469029       1 namespace_controller.go:185] Namespace has been deleted provisioning-7960\nI1010 15:51:13.560839       1 event.go:291] \"Event occurred\" object=\"volume-expand-5205-506/csi-hostpathplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful\"\nE1010 15:51:13.706283       1 tokens_controller.go:262] error synchronizing serviceaccount statefulset-2594/default: secrets \"default-token-kcsdq\" is forbidden: unable to create new content in namespace statefulset-2594 because it is being terminated\nI1010 15:51:13.989529       1 event.go:291] \"Event occurred\" object=\"volume-expand-5205/csi-hostpath7mpz9\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-volume-expand-5205\\\" or manually created by system administrator\"\nI1010 15:51:14.316444       1 namespace_controller.go:185] Namespace has been deleted subpath-2509\nI1010 15:51:14.502674       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"aws-j97lg\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-01342bc3c22f91e4b\") on node \"ip-172-20-33-168.sa-east-1.compute.internal\" \nI1010 15:51:14.525038       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-ee567f5d-0327-4f7f-97fd-cc45f494a791\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-05b33fa8195d49a83\") on node \"ip-172-20-33-168.sa-east-1.compute.internal\" \nI1010 15:51:14.525480       1 operation_generator.go:1577] 
Verified volume is safe to detach for volume \"aws-j97lg\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-01342bc3c22f91e4b\") on node \"ip-172-20-33-168.sa-east-1.compute.internal\" \nI1010 15:51:14.529763       1 operation_generator.go:1577] Verified volume is safe to detach for volume \"pvc-ee567f5d-0327-4f7f-97fd-cc45f494a791\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-05b33fa8195d49a83\") on node \"ip-172-20-33-168.sa-east-1.compute.internal\" \nI1010 15:51:14.567672       1 namespace_controller.go:185] Namespace has been deleted projected-6072\nI1010 15:51:14.575716       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"deployment-2732/test-rolling-update-with-lb-864fb64577\" need=3 creating=3\nI1010 15:51:14.576551       1 event.go:291] \"Event occurred\" object=\"deployment-2732/test-rolling-update-with-lb\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set test-rolling-update-with-lb-864fb64577 to 3\"\nI1010 15:51:14.596289       1 event.go:291] \"Event occurred\" object=\"deployment-2732/test-rolling-update-with-lb-864fb64577\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: test-rolling-update-with-lb-864fb64577-z6l6g\"\nI1010 15:51:14.596710       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-2732/test-rolling-update-with-lb\" err=\"Operation cannot be fulfilled on deployments.apps \\\"test-rolling-update-with-lb\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI1010 15:51:14.614802       1 event.go:291] \"Event occurred\" object=\"deployment-2732/test-rolling-update-with-lb-864fb64577\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: test-rolling-update-with-lb-864fb64577-dfq66\"\nI1010 15:51:14.620715       1 event.go:291] \"Event occurred\" object=\"deployment-2732/test-rolling-update-with-lb-864fb64577\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: test-rolling-update-with-lb-864fb64577-kr8x8\"\nI1010 15:51:14.636102       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-4394-8264\nE1010 15:51:15.117047       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-2127/default: secrets \"default-token-9mzwz\" is forbidden: unable to create new content in namespace provisioning-2127 because it is being terminated\nE1010 15:51:15.207790       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI1010 15:51:15.741986       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-4000/pod-a0f9b256-dbe7-4dbf-a763-99d35954fe88\" PVC=\"persistent-local-volumes-test-4000/pvc-sfxwd\"\nI1010 15:51:15.742112       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-4000/pvc-sfxwd\"\nI1010 15:51:15.781243       1 pv_controller.go:930] claim \"provisioning-1232/pvc-2prtx\" bound to volume \"local-bdrj7\"\nI1010 15:51:15.788311       1 event.go:291] \"Event occurred\" object=\"volume-expand-5205/csi-hostpath7mpz9\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting 
for a volume to be created, either by external provisioner \\\"csi-hostpath-volume-expand-5205\\\" or manually created by system administrator\"\nI1010 15:51:15.820625       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"webhook-8004/sample-webhook-deployment-78988fc6cd\" need=1 creating=1\nI1010 15:51:15.820802       1 event.go:291] \"Event occurred\" object=\"webhook-8004/sample-webhook-deployment\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set sample-webhook-deployment-78988fc6cd to 1\"\nI1010 15:51:15.849340       1 pv_controller.go:1340] isVolumeReleased[pvc-ee567f5d-0327-4f7f-97fd-cc45f494a791]: volume is released\nI1010 15:51:15.916240       1 pv_controller.go:879] volume \"local-bdrj7\" entered phase \"Bound\"\nI1010 15:51:15.916354       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"webhook-8004/sample-webhook-deployment\" err=\"Operation cannot be fulfilled on deployments.apps \\\"sample-webhook-deployment\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI1010 15:51:15.916671       1 pv_controller.go:982] volume \"local-bdrj7\" bound to claim \"provisioning-1232/pvc-2prtx\"\nI1010 15:51:15.955554       1 event.go:291] \"Event occurred\" object=\"webhook-8004/sample-webhook-deployment-78988fc6cd\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: sample-webhook-deployment-78988fc6cd-zv99t\"\nI1010 15:51:16.019160       1 pv_controller.go:823] claim \"provisioning-1232/pvc-2prtx\" entered phase \"Bound\"\nI1010 15:51:16.117338       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"replication-controller-1309/condition-test\" need=2 creating=1\nE1010 15:51:16.128262       1 tokens_controller.go:262] error synchronizing serviceaccount replication-controller-1309/default: secrets \"default-token-nm8jc\" is forbidden: unable to create new content in namespace replication-controller-1309 because it is being terminated\nE1010 15:51:16.139811       1 resource_quota_controller.go:253] Operation cannot be fulfilled on resourcequotas \"condition-test\": the object has been modified; please apply your changes to the latest version and try again\nI1010 15:51:17.105021       1 pv_controller.go:879] volume \"pvc-9a3b6318-9ab9-4e15-aa99-e2caea73815c\" entered phase \"Bound\"\nI1010 15:51:17.105522       1 pv_controller.go:982] volume \"pvc-9a3b6318-9ab9-4e15-aa99-e2caea73815c\" bound to claim \"volume-expand-5205/csi-hostpath7mpz9\"\nI1010 15:51:17.150154       1 pv_controller.go:823] claim \"volume-expand-5205/csi-hostpath7mpz9\" entered phase \"Bound\"\nI1010 15:51:17.153745       1 resource_quota_controller.go:307] Resource quota has been deleted replication-controller-1309/condition-test\nI1010 15:51:17.267918       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-6047\nE1010 15:51:18.334094       1 namespace_controller.go:162] deletion of namespace disruption-8003 failed: unexpected items still remain in namespace: disruption-8003 for gvr: /v1, Resource=pods\nE1010 15:51:18.360832       1 namespace_controller.go:162] deletion of namespace kubectl-1423 failed: unexpected items still remain in namespace: kubectl-1423 for gvr: /v1, Resource=pods\nI1010 15:51:18.446068       1 garbagecollector.go:471] \"Processing object\" object=\"kubectl-8261/agnhost-primary-bk7wn\" objectUID=82fb4a30-35c4-4848-9a66-597b87105c1f kind=\"Pod\" 
virtual=false\nI1010 15:51:18.491851       1 garbagecollector.go:580] \"Deleting object\" object=\"kubectl-8261/agnhost-primary-bk7wn\" objectUID=82fb4a30-35c4-4848-9a66-597b87105c1f kind=\"Pod\" propagationPolicy=Background\nI1010 15:51:18.562901       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-6047-8405/csi-mockplugin-84979785c9\" objectUID=73fab4a2-2e16-4a28-93e6-27cf71152e88 kind=\"ControllerRevision\" virtual=false\nI1010 15:51:18.562992       1 stateful_set.go:440] StatefulSet has been deleted csi-mock-volumes-6047-8405/csi-mockplugin\nI1010 15:51:18.563329       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-6047-8405/csi-mockplugin-0\" objectUID=a0a7ba69-5ff3-4429-a945-71640831af4c kind=\"Pod\" virtual=false\nI1010 15:51:18.567838       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-6047-8405/csi-mockplugin-84979785c9\" objectUID=73fab4a2-2e16-4a28-93e6-27cf71152e88 kind=\"ControllerRevision\" propagationPolicy=Background\nI1010 15:51:18.568130       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-6047-8405/csi-mockplugin-0\" objectUID=a0a7ba69-5ff3-4429-a945-71640831af4c kind=\"Pod\" propagationPolicy=Background\nE1010 15:51:18.670661       1 pv_controller.go:1451] error finding provisioning plugin for claim provisioning-657/pvc-2t76k: storageclass.storage.k8s.io \"provisioning-657\" not found\nI1010 15:51:18.670733       1 event.go:291] \"Event occurred\" object=\"provisioning-657/pvc-2t76k\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"provisioning-657\\\" not found\"\nI1010 15:51:18.807940       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-6047-8405/csi-mockplugin-attacher-65f8c6b877\" objectUID=63635e05-f80c-4713-a85a-de84d868a964 kind=\"ControllerRevision\" virtual=false\nI1010 15:51:18.808050       1 stateful_set.go:440] StatefulSet has been deleted csi-mock-volumes-6047-8405/csi-mockplugin-attacher\nI1010 15:51:18.808350       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-6047-8405/csi-mockplugin-attacher-0\" objectUID=67c2265d-86ae-4ed0-8f1d-aeaef82bfbe0 kind=\"Pod\" virtual=false\nI1010 15:51:18.835644       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-5498\nI1010 15:51:18.835821       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-6047-8405/csi-mockplugin-attacher-0\" objectUID=67c2265d-86ae-4ed0-8f1d-aeaef82bfbe0 kind=\"Pod\" propagationPolicy=Background\nI1010 15:51:18.864130       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-6047-8405/csi-mockplugin-attacher-65f8c6b877\" objectUID=63635e05-f80c-4713-a85a-de84d868a964 kind=\"ControllerRevision\" propagationPolicy=Background\nI1010 15:51:18.922996       1 pv_controller.go:879] volume \"local-xm2pz\" entered phase \"Available\"\nI1010 15:51:19.189204       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-9a3b6318-9ab9-4e15-aa99-e2caea73815c\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-volume-expand-5205^e78c1593-29e1-11ec-8801-961e1d957c3e\") from node \"ip-172-20-42-51.sa-east-1.compute.internal\" \nI1010 15:51:19.295239       1 event.go:291] \"Event occurred\" object=\"volumemode-5873/awsjrkl7\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForFirstConsumer\" message=\"waiting for first consumer to be created before 
binding\"\nI1010 15:51:19.351978       1 event.go:291] \"Event occurred\" object=\"volume-expand-8077/awssxt5p\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForFirstConsumer\" message=\"waiting for first consumer to be created before binding\"\nI1010 15:51:19.754910       1 event.go:291] \"Event occurred\" object=\"volumemode-5873/awsjrkl7\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"ebs.csi.aws.com\\\" or manually created by system administrator\"\nI1010 15:51:19.855662       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume \"pvc-9a3b6318-9ab9-4e15-aa99-e2caea73815c\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-volume-expand-5205^e78c1593-29e1-11ec-8801-961e1d957c3e\") from node \"ip-172-20-42-51.sa-east-1.compute.internal\" \nI1010 15:51:19.856139       1 event.go:291] \"Event occurred\" object=\"volume-expand-5205/pod-e8419b4c-5c68-4f2c-b2b9-ac05ecaf9761\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-9a3b6318-9ab9-4e15-aa99-e2caea73815c\\\" \"\nI1010 15:51:20.215643       1 namespace_controller.go:185] Namespace has been deleted statefulset-2594\nE1010 15:51:20.905119       1 namespace_controller.go:162] deletion of namespace apply-7214 failed: unexpected items still remain in namespace: apply-7214 for gvr: /v1, Resource=pods\nI1010 15:51:20.936970       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-5583/pvc-m8pbh\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForFirstConsumer\" message=\"waiting for first consumer to be created before binding\"\nE1010 15:51:21.019265       1 namespace_controller.go:162] deletion of namespace apply-7214 failed: unexpected items still remain in namespace: apply-7214 for gvr: /v1, Resource=pods\nI1010 15:51:21.063800       1 namespace_controller.go:185] Namespace has been deleted provisioning-2127\nE1010 15:51:21.126345       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE1010 15:51:21.152704       1 namespace_controller.go:162] deletion of namespace apply-7214 failed: unexpected items still remain in namespace: apply-7214 for gvr: /v1, Resource=pods\nI1010 15:51:21.172118       1 controller.go:400] Ensuring load balancer for service deployment-2732/test-rolling-update-with-lb\nI1010 15:51:21.172673       1 event.go:291] \"Event occurred\" object=\"deployment-2732/test-rolling-update-with-lb\" kind=\"Service\" apiVersion=\"v1\" type=\"Normal\" reason=\"EnsuringLoadBalancer\" message=\"Ensuring load balancer\"\nI1010 15:51:21.172979       1 controller.go:901] Adding finalizer to service deployment-2732/test-rolling-update-with-lb\nI1010 15:51:21.204955       1 aws.go:3915] EnsureLoadBalancer(e2e-2cea5de97a-6a582.test-cncf-aws.k8s.io, deployment-2732, test-rolling-update-with-lb, sa-east-1, , [{ TCP <nil> 80 {0 80 } 31607}], map[])\nE1010 15:51:21.309610       1 namespace_controller.go:162] deletion of namespace apply-7214 failed: unexpected items still remain in namespace: apply-7214 for gvr: /v1, Resource=pods\nI1010 15:51:21.364609       1 stateful_set.go:440] StatefulSet has been deleted csi-mock-volumes-5498-644/csi-mockplugin\nI1010 15:51:21.364611       1 
garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-5498-644/csi-mockplugin-699f8bbf49\" objectUID=af4c8df0-8a3f-46a6-a8c5-d6398cec140b kind=\"ControllerRevision\" virtual=false\nI1010 15:51:21.364651       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-5498-644/csi-mockplugin-0\" objectUID=fc31257f-da0d-4c38-9158-0a30cb4c00b2 kind=\"Pod\" virtual=false\nI1010 15:51:21.366894       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-5498-644/csi-mockplugin-699f8bbf49\" objectUID=af4c8df0-8a3f-46a6-a8c5-d6398cec140b kind=\"ControllerRevision\" propagationPolicy=Background\nI1010 15:51:21.369597       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-5498-644/csi-mockplugin-0\" objectUID=fc31257f-da0d-4c38-9158-0a30cb4c00b2 kind=\"Pod\" propagationPolicy=Background\nI1010 15:51:21.510892       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-5498-644/csi-mockplugin-attacher-0\" objectUID=eca6bc75-447c-4d5e-9db9-576401cc8cdf kind=\"Pod\" virtual=false\nI1010 15:51:21.511288       1 stateful_set.go:440] StatefulSet has been deleted csi-mock-volumes-5498-644/csi-mockplugin-attacher\nI1010 15:51:21.511472       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-5498-644/csi-mockplugin-attacher-95d65fd47\" objectUID=c0748464-54ac-4d50-b529-fb77aa2ad803 kind=\"ControllerRevision\" virtual=false\nI1010 15:51:21.515025       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-5498-644/csi-mockplugin-attacher-0\" objectUID=eca6bc75-447c-4d5e-9db9-576401cc8cdf kind=\"Pod\" propagationPolicy=Background\nI1010 15:51:21.516621       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-5498-644/csi-mockplugin-attacher-95d65fd47\" objectUID=c0748464-54ac-4d50-b529-fb77aa2ad803 kind=\"ControllerRevision\" propagationPolicy=Background\nE1010 15:51:21.577270       1 namespace_controller.go:162] deletion of namespace apply-7214 failed: unexpected items still remain in namespace: apply-7214 for gvr: /v1, Resource=pods\nI1010 15:51:21.659564       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-5498-644/csi-mockplugin-resizer-769d848796\" objectUID=ff8b4fcf-a463-4e33-95b4-014f745b901d kind=\"ControllerRevision\" virtual=false\nI1010 15:51:21.659991       1 stateful_set.go:440] StatefulSet has been deleted csi-mock-volumes-5498-644/csi-mockplugin-resizer\nI1010 15:51:21.660043       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-5498-644/csi-mockplugin-resizer-0\" objectUID=7f5fdc81-79ad-48ba-b2f1-9be6fe26ab8f kind=\"Pod\" virtual=false\nE1010 15:51:21.661226       1 tokens_controller.go:262] error synchronizing serviceaccount pods-7542/default: secrets \"default-token-8wmh8\" is forbidden: unable to create new content in namespace pods-7542 because it is being terminated\nI1010 15:51:21.664782       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-5498-644/csi-mockplugin-resizer-0\" objectUID=7f5fdc81-79ad-48ba-b2f1-9be6fe26ab8f kind=\"Pod\" propagationPolicy=Background\nI1010 15:51:21.666071       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-5498-644/csi-mockplugin-resizer-769d848796\" objectUID=ff8b4fcf-a463-4e33-95b4-014f745b901d kind=\"ControllerRevision\" propagationPolicy=Background\nI1010 15:51:22.017601       1 pvc_protection_controller.go:303] \"Pod uses PVC\" 
pod=\"persistent-local-volumes-test-4000/pod-a0f9b256-dbe7-4dbf-a763-99d35954fe88\" PVC=\"persistent-local-volumes-test-4000/pvc-sfxwd\"\nI1010 15:51:22.017656       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-4000/pvc-sfxwd\"\nE1010 15:51:22.056065       1 namespace_controller.go:162] deletion of namespace apply-7214 failed: unexpected items still remain in namespace: apply-7214 for gvr: /v1, Resource=pods\nI1010 15:51:22.101536       1 graph_builder.go:587] add [v1/Pod, namespace: ephemeral-6236, name: inline-volume-tester2-b8xdx, uid: e19f848a-52d2-4bbf-b3a3-79e6ea073add] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI1010 15:51:22.101824       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-6236/inline-volume-tester2-b8xdx\" objectUID=e19f848a-52d2-4bbf-b3a3-79e6ea073add kind=\"Pod\" virtual=false\nI1010 15:51:22.102021       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-6236/inline-volume-tester2-b8xdx\" objectUID=14b0357a-6016-4535-be2d-702818c6cd04 kind=\"CiliumEndpoint\" virtual=false\nI1010 15:51:22.105576       1 garbagecollector.go:595] adding [cilium.io/v2/CiliumEndpoint, namespace: ephemeral-6236, name: inline-volume-tester2-b8xdx, uid: 14b0357a-6016-4535-be2d-702818c6cd04] to attemptToDelete, because its owner [v1/Pod, namespace: ephemeral-6236, name: inline-volume-tester2-b8xdx, uid: e19f848a-52d2-4bbf-b3a3-79e6ea073add] is deletingDependents\nI1010 15:51:22.107791       1 garbagecollector.go:580] \"Deleting object\" object=\"ephemeral-6236/inline-volume-tester2-b8xdx\" objectUID=14b0357a-6016-4535-be2d-702818c6cd04 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI1010 15:51:22.120318       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-6236/inline-volume-tester2-b8xdx\" objectUID=e19f848a-52d2-4bbf-b3a3-79e6ea073add kind=\"Pod\" virtual=false\nI1010 15:51:22.121119       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-6236/inline-volume-tester2-b8xdx\" objectUID=14b0357a-6016-4535-be2d-702818c6cd04 kind=\"CiliumEndpoint\" virtual=false\nI1010 15:51:22.123774       1 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: ephemeral-6236, name: inline-volume-tester2-b8xdx, uid: e19f848a-52d2-4bbf-b3a3-79e6ea073add]\nI1010 15:51:22.129505       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-5583/pvc-m8pbh\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForFirstConsumer\" message=\"waiting for first consumer to be created before binding\"\nI1010 15:51:22.132292       1 aws.go:3136] Existing security group ingress: sg-0180619fbb7b65369 []\nI1010 15:51:22.138278       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"csi-mock-volumes-5583/pvc-m8pbh\"\nI1010 15:51:22.134257       1 aws.go:3167] Adding security group ingress: sg-0180619fbb7b65369 [{\n  FromPort: 80,\n  IpProtocol: \"tcp\",\n  IpRanges: [{\n      CidrIp: \"0.0.0.0/0\"\n    }],\n  ToPort: 80\n} {\n  FromPort: 3,\n  IpProtocol: \"icmp\",\n  IpRanges: [{\n      CidrIp: \"0.0.0.0/0\"\n    }],\n  ToPort: 4\n}]\nI1010 15:51:22.203504       1 namespace_controller.go:185] Namespace has been deleted nettest-4386\nI1010 15:51:22.279884       1 namespace_controller.go:185] Namespace has been deleted replication-controller-1309\nI1010 15:51:22.410923       1 aws_loadbalancer.go:1009] Creating load balancer for deployment-2732/test-rolling-update-with-lb 
with name: a78b7da8b22a54ea0bd457c5e72ab9f0\nI1010 15:51:22.420541       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-4000/pod-a0f9b256-dbe7-4dbf-a763-99d35954fe88\" PVC=\"persistent-local-volumes-test-4000/pvc-sfxwd\"\nI1010 15:51:22.420568       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-4000/pvc-sfxwd\"\nI1010 15:51:22.448843       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"persistent-local-volumes-test-4000/pvc-sfxwd\"\nI1010 15:51:22.512949       1 pv_controller.go:640] volume \"local-pv47r85\" is released and reclaim policy \"Retain\" will be executed\nI1010 15:51:22.549893       1 pv_controller.go:879] volume \"local-pv47r85\" entered phase \"Released\"\nI1010 15:51:22.560812       1 pv_controller_base.go:505] deletion of claim \"persistent-local-volumes-test-4000/pvc-sfxwd\" was already processed\nE1010 15:51:22.804463       1 namespace_controller.go:162] deletion of namespace apply-7214 failed: unexpected items still remain in namespace: apply-7214 for gvr: /v1, Resource=pods\nI1010 15:51:22.956882       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"csi-mock-volumes-1105/pvc-4vzqk\"\nI1010 15:51:22.967059       1 pv_controller.go:640] volume \"pvc-e027039b-20eb-4019-bd51-39fcc2469a5b\" is released and reclaim policy \"Delete\" will be executed\nI1010 15:51:22.974239       1 pv_controller.go:879] volume \"pvc-e027039b-20eb-4019-bd51-39fcc2469a5b\" entered phase \"Released\"\nI1010 15:51:22.978302       1 pv_controller.go:1340] isVolumeReleased[pvc-e027039b-20eb-4019-bd51-39fcc2469a5b]: volume is released\nE1010 15:51:23.044035       1 tokens_controller.go:262] error synchronizing serviceaccount services-8763/default: secrets \"default-token-l69mn\" is forbidden: unable to create new content in namespace services-8763 because it is being terminated\nI1010 15:51:23.131139       1 pv_controller_base.go:505] deletion of claim \"csi-mock-volumes-1105/pvc-4vzqk\" was already processed\nE1010 15:51:23.227933       1 tokens_controller.go:262] error synchronizing serviceaccount downward-api-2234/default: secrets \"default-token-hw672\" is forbidden: unable to create new content in namespace downward-api-2234 because it is being terminated\nI1010 15:51:23.281257       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume \"pvc-ee567f5d-0327-4f7f-97fd-cc45f494a791\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-05b33fa8195d49a83\") on node \"ip-172-20-33-168.sa-east-1.compute.internal\" \nI1010 15:51:23.305549       1 pv_controller.go:1340] isVolumeReleased[pvc-ee567f5d-0327-4f7f-97fd-cc45f494a791]: volume is released\nE1010 15:51:23.333417       1 namespace_controller.go:162] deletion of namespace apply-7214 failed: unexpected items still remain in namespace: apply-7214 for gvr: /v1, Resource=pods\nI1010 15:51:23.516917       1 pv_controller_base.go:505] deletion of claim \"provisioning-6865/awskn6gd\" was already processed\nI1010 15:51:23.591850       1 pv_controller.go:879] volume \"pvc-14ee3cf4-8d19-4cd3-87f8-88695c211dd3\" entered phase \"Bound\"\nI1010 15:51:23.591908       1 pv_controller.go:982] volume \"pvc-14ee3cf4-8d19-4cd3-87f8-88695c211dd3\" bound to claim \"volumemode-5873/awsjrkl7\"\nI1010 15:51:23.598614       1 pv_controller.go:823] claim \"volumemode-5873/awsjrkl7\" entered phase \"Bound\"\nI1010 15:51:23.722217       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume 
\"pvc-14ee3cf4-8d19-4cd3-87f8-88695c211dd3\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0d7aab7408e921426\") from node \"ip-172-20-33-168.sa-east-1.compute.internal\" \nI1010 15:51:23.726535       1 aws_loadbalancer.go:1212] Updating load-balancer attributes for \"a78b7da8b22a54ea0bd457c5e72ab9f0\"\nE1010 15:51:23.733514       1 controller.go:307] error processing service deployment-2732/test-rolling-update-with-lb (will retry): failed to ensure load balancer: Unable to update load balancer attributes during attribute sync: \"AccessDenied: User: arn:aws:sts::768319786644:assumed-role/masters.e2e-2cea5de97a-6a582.test-cncf-aws.k8s.io/i-026433eec5f15a83b is not authorized to perform: elasticloadbalancing:ModifyLoadBalancerAttributes on resource: arn:aws:elasticloadbalancing:sa-east-1:768319786644:loadbalancer/a78b7da8b22a54ea0bd457c5e72ab9f0\\n\\tstatus code: 403, request id: df40ee83-55a8-43a7-aa9d-3d42b1ff9444\"\nI1010 15:51:23.733829       1 event.go:291] \"Event occurred\" object=\"deployment-2732/test-rolling-update-with-lb\" kind=\"Service\" apiVersion=\"v1\" type=\"Warning\" reason=\"SyncLoadBalancerFailed\" message=\"Error syncing load balancer: failed to ensure load balancer: Unable to update load balancer attributes during attribute sync: \\\"AccessDenied: User: arn:aws:sts::768319786644:assumed-role/masters.e2e-2cea5de97a-6a582.test-cncf-aws.k8s.io/i-026433eec5f15a83b is not authorized to perform: elasticloadbalancing:ModifyLoadBalancerAttributes on resource: arn:aws:elasticloadbalancing:sa-east-1:768319786644:loadbalancer/a78b7da8b22a54ea0bd457c5e72ab9f0\\\\n\\\\tstatus code: 403, request id: df40ee83-55a8-43a7-aa9d-3d42b1ff9444\\\"\"\nI1010 15:51:23.818613       1 garbagecollector.go:471] \"Processing object\" object=\"services-1728/verify-service-up-exec-pod-kdn5l\" objectUID=0cec6823-bd70-4b25-b3fb-8eb4db12ab35 kind=\"CiliumEndpoint\" virtual=false\nI1010 15:51:23.822800       1 garbagecollector.go:580] \"Deleting object\" object=\"services-1728/verify-service-up-exec-pod-kdn5l\" objectUID=0cec6823-bd70-4b25-b3fb-8eb4db12ab35 kind=\"CiliumEndpoint\" propagationPolicy=Background\nE1010 15:51:24.105920       1 namespace_controller.go:162] deletion of namespace apply-7214 failed: unexpected items still remain in namespace: apply-7214 for gvr: /v1, Resource=pods\nE1010 15:51:24.315768       1 tokens_controller.go:262] error synchronizing serviceaccount csi-mock-volumes-6047-8405/default: secrets \"default-token-6v555\" is forbidden: unable to create new content in namespace csi-mock-volumes-6047-8405 because it is being terminated\nE1010 15:51:25.096933       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE1010 15:51:25.525091       1 namespace_controller.go:162] deletion of namespace apply-7214 failed: unexpected items still remain in namespace: apply-7214 for gvr: /v1, Resource=pods\nI1010 15:51:25.744997       1 namespace_controller.go:185] Namespace has been deleted volume-1892\nE1010 15:51:25.757251       1 tokens_controller.go:262] error synchronizing serviceaccount persistent-local-volumes-test-4000/default: secrets \"default-token-bs64v\" is forbidden: unable to create new content in namespace persistent-local-volumes-test-4000 because it is being terminated\nI1010 15:51:25.904502       1 namespace_controller.go:185] Namespace has been deleted kubectl-8126\nI1010 15:51:26.141425       1 
operation_generator.go:369] AttachVolume.Attach succeeded for volume \"pvc-14ee3cf4-8d19-4cd3-87f8-88695c211dd3\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0d7aab7408e921426\") from node \"ip-172-20-33-168.sa-east-1.compute.internal\" \nI1010 15:51:26.141839       1 event.go:291] \"Event occurred\" object=\"volumemode-5873/pod-f651b42b-ce77-4169-8d63-4efec39b0a0d\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-14ee3cf4-8d19-4cd3-87f8-88695c211dd3\\\" \"\nE1010 15:51:26.274774       1 tokens_controller.go:262] error synchronizing serviceaccount apparmor-7171/default: secrets \"default-token-z5mmk\" is forbidden: unable to create new content in namespace apparmor-7171 because it is being terminated\nI1010 15:51:26.391517       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"provisioning-1232/pvc-2prtx\"\nI1010 15:51:26.399810       1 pv_controller.go:640] volume \"local-bdrj7\" is released and reclaim policy \"Retain\" will be executed\nI1010 15:51:26.407717       1 pv_controller.go:879] volume \"local-bdrj7\" entered phase \"Released\"\nI1010 15:51:26.536853       1 pv_controller_base.go:505] deletion of claim \"provisioning-1232/pvc-2prtx\" was already processed\nI1010 15:51:26.751646       1 namespace_controller.go:185] Namespace has been deleted pods-7542\nE1010 15:51:26.871357       1 tokens_controller.go:262] error synchronizing serviceaccount csi-mock-volumes-5498-644/default: secrets \"default-token-qh2sm\" is forbidden: unable to create new content in namespace csi-mock-volumes-5498-644 because it is being terminated\nE1010 15:51:27.320136       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE1010 15:51:27.458059       1 tokens_controller.go:262] error synchronizing serviceaccount csi-mock-volumes-5583/default: serviceaccounts \"default\" not found\nE1010 15:51:28.198552       1 namespace_controller.go:162] deletion of namespace apply-7214 failed: unexpected items still remain in namespace: apply-7214 for gvr: /v1, Resource=pods\nI1010 15:51:28.245653       1 namespace_controller.go:185] Namespace has been deleted services-8763\nI1010 15:51:28.306008       1 namespace_controller.go:185] Namespace has been deleted downward-api-2234\nI1010 15:51:28.734760       1 controller.go:400] Ensuring load balancer for service deployment-2732/test-rolling-update-with-lb\nI1010 15:51:28.734908       1 aws.go:3915] EnsureLoadBalancer(e2e-2cea5de97a-6a582.test-cncf-aws.k8s.io, deployment-2732, test-rolling-update-with-lb, sa-east-1, , [{ TCP <nil> 80 {0 80 } 31607}], map[])\nI1010 15:51:28.735215       1 event.go:291] \"Event occurred\" object=\"deployment-2732/test-rolling-update-with-lb\" kind=\"Service\" apiVersion=\"v1\" type=\"Normal\" reason=\"EnsuringLoadBalancer\" message=\"Ensuring load balancer\"\nI1010 15:51:29.006042       1 aws.go:3136] Existing security group ingress: sg-0180619fbb7b65369 [{\n  FromPort: 80,\n  IpProtocol: \"tcp\",\n  IpRanges: [{\n      CidrIp: \"0.0.0.0/0\"\n    }],\n  ToPort: 80\n} {\n  FromPort: 3,\n  IpProtocol: \"icmp\",\n  IpRanges: [{\n      CidrIp: \"0.0.0.0/0\"\n    }],\n  ToPort: 4\n}]\nI1010 15:51:29.117669       1 aws_loadbalancer.go:1185] Creating additional load balancer tags for a78b7da8b22a54ea0bd457c5e72ab9f0\nI1010 15:51:29.142092       1 aws_loadbalancer.go:1212] 
Updating load-balancer attributes for \"a78b7da8b22a54ea0bd457c5e72ab9f0\"\nE1010 15:51:29.148692       1 controller.go:307] error processing service deployment-2732/test-rolling-update-with-lb (will retry): failed to ensure load balancer: Unable to update load balancer attributes during attribute sync: \"AccessDenied: User: arn:aws:sts::768319786644:assumed-role/masters.e2e-2cea5de97a-6a582.test-cncf-aws.k8s.io/i-026433eec5f15a83b is not authorized to perform: elasticloadbalancing:ModifyLoadBalancerAttributes on resource: arn:aws:elasticloadbalancing:sa-east-1:768319786644:loadbalancer/a78b7da8b22a54ea0bd457c5e72ab9f0\\n\\tstatus code: 403, request id: 59ba77c7-db46-4d97-b53e-701c96bca77f\"\nI1010 15:51:29.148800       1 event.go:291] \"Event occurred\" object=\"deployment-2732/test-rolling-update-with-lb\" kind=\"Service\" apiVersion=\"v1\" type=\"Warning\" reason=\"SyncLoadBalancerFailed\" message=\"Error syncing load balancer: failed to ensure load balancer: Unable to update load balancer attributes during attribute sync: \\\"AccessDenied: User: arn:aws:sts::768319786644:assumed-role/masters.e2e-2cea5de97a-6a582.test-cncf-aws.k8s.io/i-026433eec5f15a83b is not authorized to perform: elasticloadbalancing:ModifyLoadBalancerAttributes on resource: arn:aws:elasticloadbalancing:sa-east-1:768319786644:loadbalancer/a78b7da8b22a54ea0bd457c5e72ab9f0\\\\n\\\\tstatus code: 403, request id: 59ba77c7-db46-4d97-b53e-701c96bca77f\\\"\"\nI1010 15:51:29.353315       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-6047-8405\nI1010 15:51:29.464728       1 namespace_controller.go:185] Namespace has been deleted kubectl-8136\nI1010 15:51:30.104978       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume \"aws-j97lg\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-01342bc3c22f91e4b\") on node \"ip-172-20-33-168.sa-east-1.compute.internal\" \nI1010 15:51:30.166971       1 namespace_controller.go:185] Namespace has been deleted custom-resource-definition-1669\nI1010 15:51:30.200121       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"aws-j97lg\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-01342bc3c22f91e4b\") from node \"ip-172-20-42-51.sa-east-1.compute.internal\" \nI1010 15:51:30.293277       1 namespace_controller.go:185] Namespace has been deleted kubectl-8261\nI1010 15:51:30.767383       1 pv_controller.go:930] claim \"provisioning-657/pvc-2t76k\" bound to volume \"local-xm2pz\"\nI1010 15:51:30.767697       1 event.go:291] \"Event occurred\" object=\"volume-expand-8077/awssxt5p\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForFirstConsumer\" message=\"waiting for first consumer to be created before binding\"\nI1010 15:51:30.775069       1 pv_controller.go:879] volume \"local-xm2pz\" entered phase \"Bound\"\nI1010 15:51:30.775096       1 pv_controller.go:982] volume \"local-xm2pz\" bound to claim \"provisioning-657/pvc-2t76k\"\nI1010 15:51:30.783069       1 pv_controller.go:823] claim \"provisioning-657/pvc-2t76k\" entered phase \"Bound\"\nI1010 15:51:30.840689       1 namespace_controller.go:185] Namespace has been deleted persistent-local-volumes-test-4000\nI1010 15:51:31.315602       1 namespace_controller.go:185] Namespace has been deleted apparmor-7171\nE1010 15:51:31.340984       1 pv_controller.go:1451] error finding provisioning plugin for claim volume-1021/pvc-kgvk7: storageclass.storage.k8s.io \"volume-1021\" not found\nI1010 15:51:31.341130       1 event.go:291] \"Event 
occurred\" object=\"volume-1021/pvc-kgvk7\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"volume-1021\\\" not found\"\nI1010 15:51:31.451645       1 namespace_controller.go:185] Namespace has been deleted kubelet-test-7500\nI1010 15:51:31.490634       1 pv_controller.go:879] volume \"local-fq7rw\" entered phase \"Available\"\nE1010 15:51:31.806167       1 pv_controller.go:1451] error finding provisioning plugin for claim provisioning-6344/pvc-2b22w: storageclass.storage.k8s.io \"provisioning-6344\" not found\nI1010 15:51:31.806257       1 event.go:291] \"Event occurred\" object=\"provisioning-6344/pvc-2b22w\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"provisioning-6344\\\" not found\"\nI1010 15:51:31.879421       1 expand_controller.go:289] Ignoring the PVC \"volume-expand-5205/csi-hostpath7mpz9\" (uid: \"9a3b6318-9ab9-4e15-aa99-e2caea73815c\") : didn't find a plugin capable of expanding the volume; waiting for an external controller to process this PVC.\nI1010 15:51:31.879615       1 event.go:291] \"Event occurred\" object=\"volume-expand-5205/csi-hostpath7mpz9\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ExternalExpanding\" message=\"Ignoring the PVC: didn't find a plugin capable of expanding the volume; waiting for an external controller to process this PVC.\"\nI1010 15:51:31.907660       1 namespace_controller.go:185] Namespace has been deleted container-probe-1361\nI1010 15:51:31.961504       1 pv_controller.go:879] volume \"local-rkvwc\" entered phase \"Available\"\nI1010 15:51:32.083559       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-5498-644\nI1010 15:51:32.583298       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-5583\nI1010 15:51:32.593311       1 resource_quota_controller.go:307] Resource quota has been deleted resourcequota-2084/test-quota\nI1010 15:51:32.648477       1 job_controller.go:406] enqueueing job job-6104/all-succeed\nI1010 15:51:32.653279       1 event.go:291] \"Event occurred\" object=\"job-6104/all-succeed\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: all-succeed--1-tqzzj\"\nI1010 15:51:32.653587       1 job_controller.go:406] enqueueing job job-6104/all-succeed\nI1010 15:51:32.660020       1 job_controller.go:406] enqueueing job job-6104/all-succeed\nI1010 15:51:32.660517       1 job_controller.go:406] enqueueing job job-6104/all-succeed\nI1010 15:51:32.661323       1 event.go:291] \"Event occurred\" object=\"job-6104/all-succeed\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: all-succeed--1-5cdtd\"\nI1010 15:51:32.664903       1 job_controller.go:406] enqueueing job job-6104/all-succeed\nI1010 15:51:32.666201       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume \"aws-j97lg\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-01342bc3c22f91e4b\") from node \"ip-172-20-42-51.sa-east-1.compute.internal\" \nI1010 15:51:32.666638       1 event.go:291] \"Event occurred\" object=\"volume-1135/aws-client\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"aws-j97lg\\\" \"\nI1010 15:51:32.672313       1 job_controller.go:406] enqueueing job job-6104/all-succeed\nI1010 
15:51:32.676751       1 controller_ref_manager.go:232] patching pod replicaset-4131_pod-adoption-release to remove its controllerRef to apps/v1/ReplicaSet:pod-adoption-release\nI1010 15:51:32.686394       1 garbagecollector.go:471] \"Processing object\" object=\"replicaset-4131/pod-adoption-release\" objectUID=f471f022-8e01-40cf-8525-98983511d738 kind=\"ReplicaSet\" virtual=false\nI1010 15:51:32.688692       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"replicaset-4131/pod-adoption-release\" need=1 creating=1\nI1010 15:51:32.689714       1 garbagecollector.go:510] object [apps/v1/ReplicaSet, namespace: replicaset-4131, name: pod-adoption-release, uid: f471f022-8e01-40cf-8525-98983511d738]'s doesn't have an owner, continue on next item\nI1010 15:51:32.692951       1 event.go:291] \"Event occurred\" object=\"replicaset-4131/pod-adoption-release\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: pod-adoption-release-g8mtk\"\nI1010 15:51:32.791479       1 stateful_set.go:440] StatefulSet has been deleted csi-mock-volumes-5583-1652/csi-mockplugin\nI1010 15:51:32.791480       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-5583-1652/csi-mockplugin-5c5dd9468c\" objectUID=0ac92387-ec8d-43ba-a927-1eec61a446e2 kind=\"ControllerRevision\" virtual=false\nI1010 15:51:32.791676       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-5583-1652/csi-mockplugin-0\" objectUID=c3c3b1a9-e170-46ea-91d5-7c02108dcffb kind=\"Pod\" virtual=false\nI1010 15:51:32.793784       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-5583-1652/csi-mockplugin-0\" objectUID=c3c3b1a9-e170-46ea-91d5-7c02108dcffb kind=\"Pod\" propagationPolicy=Background\nI1010 15:51:32.793784       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-5583-1652/csi-mockplugin-5c5dd9468c\" objectUID=0ac92387-ec8d-43ba-a927-1eec61a446e2 kind=\"ControllerRevision\" propagationPolicy=Background\nI1010 15:51:33.224398       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-5583-1652/csi-mockplugin-attacher-56b54cd87c\" objectUID=534f94d2-5d38-4c69-acd8-a727e8d4eb41 kind=\"ControllerRevision\" virtual=false\nI1010 15:51:33.224594       1 stateful_set.go:440] StatefulSet has been deleted csi-mock-volumes-5583-1652/csi-mockplugin-attacher\nI1010 15:51:33.224672       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-5583-1652/csi-mockplugin-attacher-0\" objectUID=578129bd-670b-4fc1-b156-41affb26ffb2 kind=\"Pod\" virtual=false\nI1010 15:51:33.226447       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-5583-1652/csi-mockplugin-attacher-0\" objectUID=578129bd-670b-4fc1-b156-41affb26ffb2 kind=\"Pod\" propagationPolicy=Background\nI1010 15:51:33.226859       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-5583-1652/csi-mockplugin-attacher-56b54cd87c\" objectUID=534f94d2-5d38-4c69-acd8-a727e8d4eb41 kind=\"ControllerRevision\" propagationPolicy=Background\nE1010 15:51:33.429721       1 namespace_controller.go:162] deletion of namespace apply-7214 failed: unexpected items still remain in namespace: apply-7214 for gvr: /v1, Resource=pods\nE1010 15:51:34.317126       1 disruption.go:534] Error syncing PodDisruptionBudget disruption-3288/foo, requeuing: Operation cannot be fulfilled on poddisruptionbudgets.policy \"foo\": the object has been modified; please apply your changes to the latest version and 
try again
I1010 15:51:34.762506       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-a34f3781-0025-47c3-936f-c3af09e5f654" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0555d632762a31487") on node "ip-172-20-33-168.sa-east-1.compute.internal"
I1010 15:51:34.771784       1 operation_generator.go:1577] Verified volume is safe to detach for volume "pvc-a34f3781-0025-47c3-936f-c3af09e5f654" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0555d632762a31487") on node "ip-172-20-33-168.sa-east-1.compute.internal"
I1010 15:51:35.597720       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-1105
I1010 15:51:35.634084       1 garbagecollector.go:471] "Processing object" object="csi-mock-volumes-1105-1540/csi-mockplugin-57c8965bdf" objectUID=98a9747d-4e69-4fc9-8606-6315d4a3e5de kind="ControllerRevision" virtual=false
I1010 15:51:35.634248       1 stateful_set.go:440] StatefulSet has been deleted csi-mock-volumes-1105-1540/csi-mockplugin
I1010 15:51:35.634424       1 garbagecollector.go:471] "Processing object" object="csi-mock-volumes-1105-1540/csi-mockplugin-0" objectUID=bb32f7e7-0bb1-4889-832f-62f242d0fd80 kind="Pod" virtual=false
I1010 15:51:35.640267       1 garbagecollector.go:580] "Deleting object" object="csi-mock-volumes-1105-1540/csi-mockplugin-57c8965bdf" objectUID=98a9747d-4e69-4fc9-8606-6315d4a3e5de kind="ControllerRevision" propagationPolicy=Background
I1010 15:51:35.640718       1 garbagecollector.go:580] "Deleting object" object="csi-mock-volumes-1105-1540/csi-mockplugin-0" objectUID=bb32f7e7-0bb1-4889-832f-62f242d0fd80 kind="Pod" propagationPolicy=Background
I1010 15:51:35.691697       1 event.go:291] "Event occurred" object="provisioning-1718-5377/csi-hostpathplugin" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful"
I1010 15:51:35.915869       1 pvc_protection_controller.go:291] "PVC is unused" PVC="topology-1406/pvc-nzwvw"
I1010 15:51:35.923183       1 pv_controller.go:640] volume "pvc-a34f3781-0025-47c3-936f-c3af09e5f654" is released and reclaim policy "Delete" will be executed
I1010 15:51:35.928740       1 pv_controller.go:879] volume "pvc-a34f3781-0025-47c3-936f-c3af09e5f654" entered phase "Released"
I1010 15:51:35.931220       1 pv_controller.go:1340] isVolumeReleased[pvc-a34f3781-0025-47c3-936f-c3af09e5f654]: volume is released
E1010 15:51:36.080885       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1010 15:51:36.257589       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-1232/default: secrets "default-token-mrr2g" is forbidden: unable to create new content in namespace provisioning-1232 because it is being terminated
I1010 15:51:36.407684       1 event.go:291] "Event occurred" object="provisioning-1718/pvc-2p5d6" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"csi-hostpath-provisioning-1718\" or manually created by system administrator"
I1010 15:51:37.088435       1 namespace_controller.go:185] Namespace has been deleted provisioning-6865
I1010 15:51:37.609051       1 job_controller.go:406] enqueueing job job-6104/all-succeed
I1010 15:51:37.644083       1 pv_controller.go:879] volume "local-pvv6c64" entered phase "Available"
I1010 15:51:37.783395       1 pv_controller.go:930] claim "persistent-local-volumes-test-6542/pvc-k8xq7" bound to volume "local-pvv6c64"
I1010 15:51:37.790174       1 pv_controller.go:879] volume "local-pvv6c64" entered phase "Bound"
I1010 15:51:37.790204       1 pv_controller.go:982] volume "local-pvv6c64" bound to claim "persistent-local-volumes-test-6542/pvc-k8xq7"
I1010 15:51:37.795917       1 pv_controller.go:823] claim "persistent-local-volumes-test-6542/pvc-k8xq7" entered phase "Bound"
I1010 15:51:38.213562       1 job_controller.go:406] enqueueing job job-6104/all-succeed
E1010 15:51:38.228782       1 tokens_controller.go:262] error synchronizing serviceaccount replicaset-4131/default: secrets "default-token-rq9rx" is forbidden: unable to create new content in namespace replicaset-4131 because it is being terminated
I1010 15:51:38.243882       1 replica_set.go:563] "Too few replicas" replicaSet="replicaset-4131/pod-adoption-release" need=1 creating=1
E1010 15:51:38.500747       1 tokens_controller.go:262] error synchronizing serviceaccount csi-mock-volumes-5583-1652/default: secrets "default-token-cqz6t" is forbidden: unable to create new content in namespace csi-mock-volumes-5583-1652 because it is being terminated
E1010 15:51:38.949178       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1010 15:51:39.149568       1 controller.go:400] Ensuring load balancer for service deployment-2732/test-rolling-update-with-lb
I1010 15:51:39.149629       1 aws.go:3915] EnsureLoadBalancer(e2e-2cea5de97a-6a582.test-cncf-aws.k8s.io, deployment-2732, test-rolling-update-with-lb, sa-east-1, , [{ TCP <nil> 80 {0 80 } 31607}], map[])
I1010 15:51:39.150193       1 event.go:291] "Event occurred" object="deployment-2732/test-rolling-update-with-lb" kind="Service" apiVersion="v1" type="Normal" reason="EnsuringLoadBalancer" message="Ensuring load balancer"
I1010 15:51:39.555181       1 aws.go:3136] Existing security group ingress: sg-0180619fbb7b65369 [{
  FromPort: 80,
  IpProtocol: "tcp",
  IpRanges: [{
      CidrIp: "0.0.0.0/0"
    }],
  ToPort: 80
} {
  FromPort: 3,
  IpProtocol: "icmp",
  IpRanges: [{
      CidrIp: "0.0.0.0/0"
    }],
  ToPort: 4
}]
E1010 15:51:39.608092       1 tokens_controller.go:262] error synchronizing serviceaccount subpath-354/default: secrets "default-token-7svh5" is forbidden: unable to create new content in namespace subpath-354 because it is being terminated
I1010 15:51:39.657211       1 aws_loadbalancer.go:1185] Creating additional load balancer tags for a78b7da8b22a54ea0bd457c5e72ab9f0
I1010 15:51:39.712369       1 aws_loadbalancer.go:1212] Updating load-balancer attributes for "a78b7da8b22a54ea0bd457c5e72ab9f0"
E1010 15:51:39.719679       1 controller.go:307] error processing service deployment-2732/test-rolling-update-with-lb (will retry): failed to ensure load balancer: Unable to update load balancer attributes during attribute sync: "AccessDenied: User: arn:aws:sts::768319786644:assumed-role/masters.e2e-2cea5de97a-6a582.test-cncf-aws.k8s.io/i-026433eec5f15a83b is not authorized to perform: elasticloadbalancing:ModifyLoadBalancerAttributes on resource: arn:aws:elasticloadbalancing:sa-east-1:768319786644:loadbalancer/a78b7da8b22a54ea0bd457c5e72ab9f0\n\tstatus code: 403, request id: 275f255a-9fb3-4d0d-9f59-5c435239c957"
I1010 15:51:39.719838       1 event.go:291] "Event occurred" object="deployment-2732/test-rolling-update-with-lb" kind="Service" apiVersion="v1" type="Warning" reason="SyncLoadBalancerFailed" message="Error syncing load balancer: failed to ensure load balancer: Unable to update load balancer attributes during attribute sync: \"AccessDenied: User: arn:aws:sts::768319786644:assumed-role/masters.e2e-2cea5de97a-6a582.test-cncf-aws.k8s.io/i-026433eec5f15a83b is not authorized to perform: elasticloadbalancing:ModifyLoadBalancerAttributes on resource: arn:aws:elasticloadbalancing:sa-east-1:768319786644:loadbalancer/a78b7da8b22a54ea0bd457c5e72ab9f0\\n\\tstatus code: 403, request id: 275f255a-9fb3-4d0d-9f59-5c435239c957\""
I1010 15:51:39.929053       1 pv_controller.go:879] volume "pvc-2eab74ba-6b50-4c78-8e42-09eb0a02add6" entered phase "Bound"
I1010 15:51:39.929086       1 pv_controller.go:982] volume "pvc-2eab74ba-6b50-4c78-8e42-09eb0a02add6" bound to claim "provisioning-1718/pvc-2p5d6"
I1010 15:51:39.938640       1 pv_controller.go:823] claim "provisioning-1718/pvc-2p5d6" entered phase "Bound"
I1010 15:51:40.140261       1 pvc_protection_controller.go:291] "PVC is unused" PVC="provisioning-657/pvc-2t76k"
I1010 15:51:40.147766       1 pv_controller.go:640] volume "local-xm2pz" is released and reclaim policy "Retain" will be executed
I1010 15:51:40.151656       1 pv_controller.go:879] volume "local-xm2pz" entered phase "Released"
I1010 15:51:40.291813       1 pv_controller_base.go:505] deletion of claim "provisioning-657/pvc-2t76k" was already processed
I1010 15:51:40.410620       1 job_controller.go:406] enqueueing job job-6104/all-succeed
I1010 15:51:40.418379       1 job_controller.go:406] enqueueing job job-6104/all-succeed
I1010 15:51:40.422170       1 event.go:291] "Event occurred" object="job-6104/all-succeed" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: all-succeed--1-7h8r4"
I1010 15:51:40.423023       1 job_controller.go:406] enqueueing job job-6104/all-succeed
I1010 15:51:40.435379       1 job_controller.go:406] enqueueing job job-6104/all-succeed
E1010 15:51:41.073025       1 tokens_controller.go:262] error synchronizing serviceaccount csi-mock-volumes-1105-1540/default: secrets "default-token-jkk94" is forbidden: unable to create new content in namespace csi-mock-volumes-1105-1540 because it is being terminated
I1010 15:51:41.305730       1 namespace_controller.go:185] Namespace has been deleted provisioning-1232
I1010 15:51:41.507746       1 pv_controller.go:1340] isVolumeReleased[pvc-a34f3781-0025-47c3-936f-c3af09e5f654]: volume is released
I1010 15:51:41.624651       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-a34f3781-0025-47c3-936f-c3af09e5f654" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0555d632762a31487") on node "ip-172-20-33-168.sa-east-1.compute.internal"
I1010 15:51:41.660522       1 pv_controller_base.go:505] deletion of claim "topology-1406/pvc-nzwvw" was already processed
I1010 15:51:41.970696       1 namespace_controller.go:185] Namespace has been deleted projected-6209
I1010 15:51:42.263998       1 garbagecollector.go:471] "Processing object" object="webhook-8004/e2e-test-webhook-fcvkz" objectUID=c53a1ca6-230f-4f49-882e-71e3db057e67 kind="EndpointSlice" virtual=false
I1010 15:51:42.279105       1 garbagecollector.go:580] "Deleting object" object="webhook-8004/e2e-test-webhook-fcvkz" objectUID=c53a1ca6-230f-4f49-882e-71e3db057e67 kind="EndpointSlice" propagationPolicy=Background
I1010 15:51:42.428228       1 garbagecollector.go:471] "Processing object" object="webhook-8004/sample-webhook-deployment-78988fc6cd" objectUID=372eadcf-5370-4069-a0f7-491b6315a01d kind="ReplicaSet" virtual=false
I1010 15:51:42.429015       1 deployment_controller.go:583] "Deployment has been deleted" deployment="webhook-8004/sample-webhook-deployment"
I1010 15:51:42.435850       1 garbagecollector.go:580] "Deleting object" object="webhook-8004/sample-webhook-deployment-78988fc6cd" objectUID=372eadcf-5370-4069-a0f7-491b6315a01d kind="ReplicaSet" propagationPolicy=Background
I1010 15:51:42.439776       1 garbagecollector.go:471] "Processing object" object="webhook-8004/sample-webhook-deployment-78988fc6cd-zv99t" objectUID=6ad8f20c-cea7-49d1-8193-7be8588ce602 kind="Pod" virtual=false
I1010 15:51:42.442753       1 garbagecollector.go:580] "Deleting object" object="webhook-8004/sample-webhook-deployment-78988fc6cd-zv99t" objectUID=6ad8f20c-cea7-49d1-8193-7be8588ce602 kind="Pod" propagationPolicy=Background
I1010 15:51:42.451206       1 garbagecollector.go:471] "Processing object" object="webhook-8004/sample-webhook-deployment-78988fc6cd-zv99t" objectUID=c01092a8-9db1-46d5-bd86-bc006a163fb6 kind="CiliumEndpoint" virtual=false
I1010 15:51:42.455811       1 garbagecollector.go:580] "Deleting object" object="webhook-8004/sample-webhook-deployment-78988fc6cd-zv99t" objectUID=c01092a8-9db1-46d5-bd86-bc006a163fb6 kind="CiliumEndpoint" propagationPolicy=Background
I1010 15:51:42.741211       1 job_controller.go:406] enqueueing job job-2341/foo
I1010 15:51:42.745573       1 event.go:291] "Event occurred" object="job-2341/foo" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: foo--1-d4f4m"
I1010 15:51:42.748206       1 event.go:291] "Event occurred" object="job-2341/foo" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: foo--1-wjl6j"
I1010 15:51:42.749871       1 job_controller.go:406] enqueueing job job-2341/foo
I1010 15:51:42.753064       1 job_controller.go:406] enqueueing job job-2341/foo
I1010 15:51:42.757267       1 job_controller.go:406] enqueueing job job-2341/foo
I1010 15:51:42.757881       1 job_controller.go:406] enqueueing job job-2341/foo
I1010 15:51:42.763777       1 job_controller.go:406] enqueueing job job-2341/foo
I1010 15:51:42.780964       1 job_controller.go:406] enqueueing job job-2341/foo
I1010 15:51:43.174656       1 namespace_controller.go:185] Namespace has been deleted resourcequota-2084
I1010 15:51:43.579198       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-5583-1652
E1010 15:51:43.791898       1 namespace_controller.go:162] deletion of namespace apply-7214 failed: unexpected items still remain in namespace: apply-7214 for gvr: /v1, Resource=pods
I1010 15:51:43.848814       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "pvc-2eab74ba-6b50-4c78-8e42-09eb0a02add6" (UniqueName: "kubernetes.io/csi/csi-hostpath-provisioning-1718^f51f44b4-29e1-11ec-9ef8-6627868fd8f9") from node "ip-172-20-61-156.sa-east-1.compute.internal"
I1010 15:51:44.417992       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume "pvc-2eab74ba-6b50-4c78-8e42-09eb0a02add6" (UniqueName: "kubernetes.io/csi/csi-hostpath-provisioning-1718^f51f44b4-29e1-11ec-9ef8-6627868fd8f9") from node "ip-172-20-61-156.sa-east-1.compute.internal"
I1010 15:51:44.418178       1 event.go:291] "Event occurred" object="provisioning-1718/hostpath-injector" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-2eab74ba-6b50-4c78-8e42-09eb0a02add6\" "
I1010 15:51:44.692711       1 namespace_controller.go:185] Namespace has been deleted subpath-354
I1010 15:51:44.964524       1 utils.go:366] couldn't find ipfamilies for headless service: services-3071/externalname-service likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly
I1010 15:51:45.179478       1 job_controller.go:406] enqueueing job job-2341/foo
I1010 15:51:45.384067       1 event.go:291] "Event occurred" object="statefulset-2611/datadir-ss-1" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I1010 15:51:45.384258       1 event.go:291] "Event occurred" object="statefulset-2611/ss" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Claim datadir-ss-1 Pod ss-1 in StatefulSet ss success"
I1010 15:51:45.388602       1 event.go:291] "Event occurred" object="statefulset-2611/ss" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod ss-1 in StatefulSet ss successful"
I1010 15:51:45.400654       1 replica_set.go:563] "Too few replicas" replicaSet="services-3071/externalname-service" need=2 creating=2
I1010 15:51:45.404031       1 event.go:291] "Event occurred" object="statefulset-2611/datadir-ss-1" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"ebs.csi.aws.com\" or manually created by system administrator"
I1010 15:51:45.408204       1 event.go:291] "Event occurred" object="services-3071/externalname-service" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: externalname-service-65zcw"
I1010 15:51:45.414576       1 job_controller.go:406] enqueueing job job-6104/all-succeed
I1010 15:51:45.426192       1 event.go:291] "Event occurred" object="services-3071/externalname-service" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: externalname-service-g7dxh"
I1010 15:51:45.428872       1 event.go:291] "Event occurred" object="job-6104/all-succeed" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: all-succeed--1-wswk6"
I1010 15:51:45.429323       1 job_controller.go:406] enqueueing job job-6104/all-succeed
I1010 15:51:45.435683       1 job_controller.go:406] enqueueing job job-6104/all-succeed
I1010 15:51:45.442559       1 job_controller.go:406] enqueueing job job-6104/all-succeed
I1010 15:51:45.459616       1 job_controller.go:406] enqueueing job job-6104/all-succeed
I1010 15:51:45.767429       1 pv_controller.go:930] claim "volume-1021/pvc-kgvk7" bound to volume "local-fq7rw"
I1010 15:51:45.767732       1 event.go:291] "Event occurred" object="volume-expand-8077/awssxt5p" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I1010 15:51:45.775365       1 pv_controller.go:879] volume "local-fq7rw" entered phase "Bound"
I1010 15:51:45.775627       1 pv_controller.go:982] volume "local-fq7rw" bound to claim "volume-1021/pvc-kgvk7"
I1010 15:51:45.782067       1 pv_controller.go:823] claim "volume-1021/pvc-kgvk7" entered phase "Bound"
I1010 15:51:45.782587       1 pv_controller.go:930] claim "provisioning-6344/pvc-2b22w" bound to volume "local-rkvwc"
I1010 15:51:45.782945       1 event.go:291] "Event occurred" object="statefulset-2611/datadir-ss-1" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"ebs.csi.aws.com\" or manually created by system administrator"
I1010 15:51:45.792837       1 pv_controller.go:879] volume "local-rkvwc" entered phase "Bound"
I1010 15:51:45.792863       1 pv_controller.go:982] volume "local-rkvwc" bound to claim "provisioning-6344/pvc-2b22w"
I1010 15:51:45.799241       1 pv_controller.go:823] claim "provisioning-6344/pvc-2b22w" entered phase "Bound"
E1010 15:51:46.801021       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1010 15:51:46.998408       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-657/default: secrets "default-token-w8pws" is forbidden: unable to create new content in namespace provisioning-657 because it is being terminated
I1010 15:51:47.030174       1 job_controller.go:406] enqueueing job job-6104/all-succeed
I1010 15:51:47.990691       1 job_controller.go:406] enqueueing job job-6104/all-succeed
I1010 15:51:47.994695       1 job_controller.go:406] enqueueing job job-6104/all-succeed
I1010 15:51:48.389722       1 namespace_controller.go:185] Namespace has been deleted configmap-8721
I1010 15:51:48.480507       1 namespace_controller.go:185] Namespace has been deleted replicaset-4131
I1010 15:51:48.945927       1 pv_controller.go:879] volume "pvc-623f628d-3128-4f7f-ae92-935dd52cadcc" entered phase "Bound"
I1010 15:51:48.946150       1 pv_controller.go:982] volume "pvc-623f628d-3128-4f7f-ae92-935dd52cadcc" bound to claim "statefulset-2611/datadir-ss-1"
I1010 15:51:48.954023       1 pv_controller.go:823] claim "statefulset-2611/datadir-ss-1" entered phase "Bound"
E1010 15:51:49.173675       1 tokens_controller.go:262] error synchronizing serviceaccount disruption-3288/default: secrets "default-token-tn9n9" is forbidden: unable to create new content in namespace disruption-3288 because it is being terminated
I1010 15:51:49.209841       1 job_controller.go:406] enqueueing job job-2341/foo
I1010 15:51:49.407709       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "pvc-623f628d-3128-4f7f-ae92-935dd52cadcc" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0857a98045dd05d8e") from node "ip-172-20-33-168.sa-east-1.compute.internal"
I1010 15:51:49.619102       1 job_controller.go:406] enqueueing job job-6104/all-succeed
I1010 15:51:49.619899       1 event.go:291] "Event occurred" object="job-6104/all-succeed" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
I1010 15:51:49.626494       1 job_controller.go:406] enqueueing job job-6104/all-succeed
I1010 15:51:50.708195       1 event.go:291] "Event occurred" object="volume-expand-8077/awssxt5p" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I1010 15:51:50.711796       1 pvc_protection_controller.go:291] "PVC is unused" PVC="volume-expand-8077/awssxt5p"
I1010 15:51:51.135813       1 pv_controller.go:879] volume "local-pvdkcjf" entered phase "Available"
I1010 15:51:51.273455       1 pv_controller.go:930] claim "persistent-local-volumes-test-4881/pvc-kzx8f" bound to volume "local-pvdkcjf"
I1010 15:51:51.283270       1 pv_controller.go:879] volume "local-pvdkcjf" entered phase "Bound"
I1010 15:51:51.283334       1 pv_controller.go:982] volume "local-pvdkcjf" bound to claim "persistent-local-volumes-test-4881/pvc-kzx8f"
I1010 15:51:51.285948       1 pvc_protection_controller.go:303] "Pod uses PVC" pod="persistent-local-volumes-test-6542/pod-28ff1962-a44f-453a-a170-92bb088c1e9d" PVC="persistent-local-volumes-test-6542/pvc-k8xq7"
I1010 15:51:51.285973       1 pvc_protection_controller.go:181] "Keeping PVC because it is being used" PVC="persistent-local-volumes-test-6542/pvc-k8xq7"
I1010 15:51:51.290597       1 pv_controller.go:823] claim "persistent-local-volumes-test-4881/pvc-kzx8f" entered phase "Bound"
I1010 15:51:51.431100       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-14ee3cf4-8d19-4cd3-87f8-88695c211dd3" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0d7aab7408e921426") on node "ip-172-20-33-168.sa-east-1.compute.internal"
I1010 15:51:51.440263       1 operation_generator.go:1577] Verified volume is safe to detach for volume "pvc-14ee3cf4-8d19-4cd3-87f8-88695c211dd3" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0d7aab7408e921426") on node "ip-172-20-33-168.sa-east-1.compute.internal"
I1010 15:51:51.829250       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume "pvc-623f628d-3128-4f7f-ae92-935dd52cadcc" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0857a98045dd05d8e") from node "ip-172-20-33-168.sa-east-1.compute.internal"
I1010 15:51:51.829683       1 event.go:291] "Event occurred" object="statefulset-2611/ss-1" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-623f628d-3128-4f7f-ae92-935dd52cadcc\" "
I1010 15:51:52.000713       1 pvc_protection_controller.go:291] "PVC is unused" PVC="persistent-local-volumes-test-4881/pvc-kzx8f"
I1010 15:51:52.008459       1 pv_controller.go:640] volume "local-pvdkcjf" is released and reclaim policy "Retain" will be executed
I1010 15:51:52.013047       1 pv_controller.go:879] volume "local-pvdkcjf" entered phase "Released"
I1010 15:51:52.130318       1 namespace_controller.go:185] Namespace has been deleted provisioning-657
I1010 15:51:52.152099       1 pv_controller_base.go:505] deletion of claim "persistent-local-volumes-test-4881/pvc-kzx8f" was already processed
I1010 15:51:52.271082       1 event.go:291] "Event occurred" object="provisioning-3829/awsvxh7z" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I1010 15:51:52.287802       1 namespace_controller.go:185] Namespace has been deleted webhook-8004-markers
I1010 15:51:52.566773       1 event.go:291] "Event occurred" object="provisioning-3829/awsvxh7z" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"ebs.csi.aws.com\" or manually created by system administrator"
I1010 15:51:52.568174       1 event.go:291] "Event occurred" object="provisioning-3829/awsvxh7z" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"ebs.csi.aws.com\" or manually created by system administrator"
E1010 15:51:52.899956       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1010 15:51:52.997171       1 pvc_protection_controller.go:291] "PVC is unused" PVC="volumemode-5873/awsjrkl7"
I1010 15:51:53.004143       1 pv_controller.go:640] volume "pvc-14ee3cf4-8d19-4cd3-87f8-88695c211dd3" is released and reclaim policy "Delete" will be executed
I1010 15:51:53.008033       1 pv_controller.go:879] volume "pvc-14ee3cf4-8d19-4cd3-87f8-88695c211dd3" entered phase "Released"
I1010 15:51:53.011294       1 pv_controller.go:1340] isVolumeReleased[pvc-14ee3cf4-8d19-4cd3-87f8-88695c211dd3]: volume is released
I1010 15:51:53.116591       1 pvc_protection_controller.go:303] "Pod uses PVC" pod="persistent-local-volumes-test-6542/pod-28ff1962-a44f-453a-a170-92bb088c1e9d" PVC="persistent-local-volumes-test-6542/pvc-k8xq7"
I1010 15:51:53.118229       1 pvc_protection_controller.go:181] "Keeping PVC because it is being used" PVC="persistent-local-volumes-test-6542/pvc-k8xq7"
I1010 15:51:53.124627       1 pvc_protection_controller.go:303] "Pod uses PVC" pod="persistent-local-volumes-test-6542/pod-0bb4edb7-ea48-4e0a-be37-56d508504324" PVC="persistent-local-volumes-test-6542/pvc-k8xq7"
I1010 15:51:53.125716       1 pvc_protection_controller.go:181] "Keeping PVC because it is being used" PVC="persistent-local-volumes-test-6542/pvc-k8xq7"
I1010 15:51:53.127818       1 pvc_protection_controller.go:303] "Pod uses PVC" pod="persistent-local-volumes-test-6542/pod-28ff1962-a44f-453a-a170-92bb088c1e9d" PVC="persistent-local-volumes-test-6542/pvc-k8xq7"
I1010 15:51:53.127885       1 pvc_protection_controller.go:181] "Keeping PVC because it is being used" PVC="persistent-local-volumes-test-6542/pvc-k8xq7"
I1010 15:51:53.154416       1 pvc_protection_controller.go:303] "Pod uses PVC" pod="persistent-local-volumes-test-6542/pod-28ff1962-a44f-453a-a170-92bb088c1e9d" PVC="persistent-local-volumes-test-6542/pvc-k8xq7"
I1010 15:51:53.154434       1 pvc_protection_controller.go:181] "Keeping PVC because it is being used" PVC="persistent-local-volumes-test-6542/pvc-k8xq7"
I1010 15:51:53.157780       1 pvc_protection_controller.go:303] "Pod uses PVC" pod="persistent-local-volumes-test-6542/pod-28ff1962-a44f-453a-a170-92bb088c1e9d" PVC="persistent-local-volumes-test-6542/pvc-k8xq7"
I1010 15:51:53.157800       1 pvc_protection_controller.go:181] "Keeping PVC because it is being used" PVC="persistent-local-volumes-test-6542/pvc-k8xq7"
I1010 15:51:53.161934       1 pvc_protection_controller.go:291] "PVC is unused" PVC="persistent-local-volumes-test-6542/pvc-k8xq7"
I1010 15:51:53.166905       1 pv_controller.go:640] volume "local-pvv6c64" is released and reclaim policy "Retain" will be executed
I1010 15:51:53.169843       1 pv_controller.go:879] volume "local-pvv6c64" entered phase "Released"
I1010 15:51:53.173578       1 pv_controller_base.go:505] deletion of claim "persistent-local-volumes-test-6542/pvc-k8xq7" was already processed
I1010 15:51:53.863843       1 pvc_protection_controller.go:291] "PVC is unused" PVC="volume-1135/pvc-s8br4"
I1010 15:51:53.870390       1 pv_controller.go:640] volume "aws-j97lg" is released and reclaim policy "Retain" will be executed
I1010 15:51:53.873342       1 pv_controller.go:879] volume "aws-j97lg" entered phase "Released"
I1010 15:51:53.961550       1 replica_set.go:563] "Too few replicas" replicaSet="gc-9410/simpletest.rc" need=10 creating=10
I1010 15:51:53.965492       1 event.go:291] "Event occurred" object="gc-9410/simpletest.rc" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: simpletest.rc-lgjwq"
I1010 15:51:53.994172       1 event.go:291] "Event occurred" object="gc-9410/simpletest.rc" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: simpletest.rc-k5k5t"
I1010 15:51:54.041093       1 event.go:291] "Event occurred" object="gc-9410/simpletest.rc" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: simpletest.rc-926st"
I1010 15:51:54.041120       1 event.go:291] "Event occurred" object="gc-9410/simpletest.rc" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: simpletest.rc-hvxpx"
I1010 15:51:54.041191       1 event.go:291] "Event occurred" object="gc-9410/simpletest.rc" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: simpletest.rc-w7w6m"
I1010 15:51:54.041210       1 event.go:291] "Event occurred" object="gc-9410/simpletest.rc" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: simpletest.rc-t6qkr"
I1010 15:51:54.041259       1 event.go:291] "Event occurred" object="gc-9410/simpletest.rc" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: simpletest.rc-57zph"
I1010 15:51:54.043427       1 event.go:291] "Event occurred" object="gc-9410/simpletest.rc" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: simpletest.rc-lc5gl"
I1010 15:51:54.044137       1 event.go:291] "Event occurred" object="gc-9410/simpletest.rc" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: simpletest.rc-wvzxp"
I1010 15:51:54.058725       1 event.go:291] "Event occurred" object="gc-9410/simpletest.rc" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: simpletest.rc-lfqwl"
I1010 15:51:54.837504       1 graph_builder.go:587] add [v1/Pod, namespace: ephemeral-6236, name: inline-volume-tester-rfwkl, uid: 0a78f5ee-0cad-40f3-9042-38c4ceaeac9c] to the attemptToDelete, because it's waiting for its dependents to be deleted
I1010 15:51:54.838133       1 garbagecollector.go:471] "Processing object" object="ephemeral-6236/inline-volume-tester-rfwkl" objectUID=257fa8ba-605a-4f75-82f5-cdbf64355656 kind="CiliumEndpoint" virtual=false
I1010 15:51:54.838317       1 garbagecollector.go:471] "Processing object" object="ephemeral-6236/in