Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2021-10-12 18:14
Elapsed: 30m56s
Revision: master

No Test Failures!


Error lines from build-log.txt

... skipping 128 lines ...
I1012 18:15:57.518590    4806 http.go:37] curl https://storage.googleapis.com/kops-ci/bin/latest-ci-updown-green.txt
I1012 18:15:57.525968    4806 http.go:37] curl https://storage.googleapis.com/kops-ci/bin/1.23.0-alpha.2+v1.23.0-alpha.1-424-ge8e9f04492/linux/amd64/kops
I1012 18:15:58.716208    4806 up.go:43] Cleaning up any leaked resources from previous cluster
I1012 18:15:58.716253    4806 dumplogs.go:40] /logs/artifacts/19ce6e99-2b88-11ec-aee5-323daf952f06/kops toolbox dump --name e2e-2c3257e690-19bfa.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /etc/aws-ssh/aws-ssh-private --ssh-user ubuntu
I1012 18:15:58.775280    4825 featureflag.go:166] FeatureFlag "SpecOverrideFlag"=true
I1012 18:15:58.775416    4825 featureflag.go:166] FeatureFlag "AlphaAllowGCE"=true
Error: Cluster.kops.k8s.io "e2e-2c3257e690-19bfa.test-cncf-aws.k8s.io" not found
W1012 18:15:59.330859    4806 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I1012 18:15:59.330937    4806 down.go:48] /logs/artifacts/19ce6e99-2b88-11ec-aee5-323daf952f06/kops delete cluster --name e2e-2c3257e690-19bfa.test-cncf-aws.k8s.io --yes
I1012 18:15:59.388056    4835 featureflag.go:166] FeatureFlag "SpecOverrideFlag"=true
I1012 18:15:59.388179    4835 featureflag.go:166] FeatureFlag "AlphaAllowGCE"=true
Error: error reading cluster configuration: Cluster.kops.k8s.io "e2e-2c3257e690-19bfa.test-cncf-aws.k8s.io" not found
I1012 18:15:59.887485    4806 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2021/10/12 18:15:59 failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
I1012 18:15:59.897507    4806 http.go:37] curl https://ip.jsb.workers.dev
I1012 18:15:59.997960    4806 up.go:144] /logs/artifacts/19ce6e99-2b88-11ec-aee5-323daf952f06/kops create cluster --name e2e-2c3257e690-19bfa.test-cncf-aws.k8s.io --cloud aws --kubernetes-version https://storage.googleapis.com/kubernetes-release/release/v1.22.2 --ssh-public-key /etc/aws-ssh/aws-ssh-public --override cluster.spec.nodePortAccess=0.0.0.0/0 --yes --image=099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20211001 --channel=alpha --networking=kopeio --container-runtime=containerd --node-size=t3.large --admin-access 34.68.176.126/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones us-west-1a --master-size c5.large
I1012 18:16:00.021261    4846 featureflag.go:166] FeatureFlag "SpecOverrideFlag"=true
I1012 18:16:00.021362    4846 featureflag.go:166] FeatureFlag "AlphaAllowGCE"=true
I1012 18:16:00.054458    4846 create_cluster.go:838] Using SSH public key: /etc/aws-ssh/aws-ssh-public
I1012 18:16:00.658766    4846 new_cluster.go:1077]  Cloud Provider ID = aws
... skipping 31 lines ...
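
For readability, the single-line create-cluster invocation logged above corresponds to the following multi-line form. This is only a restatement of the flags already shown in the log; it assumes a kops binary on PATH rather than the artifact path used by the test harness.

    # Same flags as the logged invocation, split across lines for readability.
    kops create cluster \
      --name e2e-2c3257e690-19bfa.test-cncf-aws.k8s.io \
      --cloud aws \
      --kubernetes-version https://storage.googleapis.com/kubernetes-release/release/v1.22.2 \
      --ssh-public-key /etc/aws-ssh/aws-ssh-public \
      --override cluster.spec.nodePortAccess=0.0.0.0/0 \
      --image=099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20211001 \
      --channel=alpha \
      --networking=kopeio \
      --container-runtime=containerd \
      --master-count 1 --master-size c5.large --master-volume-size 48 \
      --node-count 4 --node-size t3.large --node-volume-size 48 \
      --zones us-west-1a \
      --admin-access 34.68.176.126/32 \
      --yes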

I1012 18:16:23.865016    4806 up.go:181] /logs/artifacts/19ce6e99-2b88-11ec-aee5-323daf952f06/kops validate cluster --name e2e-2c3257e690-19bfa.test-cncf-aws.k8s.io --count 10 --wait 15m0s
I1012 18:16:23.891402    4865 featureflag.go:166] FeatureFlag "SpecOverrideFlag"=true
I1012 18:16:23.891513    4865 featureflag.go:166] FeatureFlag "AlphaAllowGCE"=true
Validating cluster e2e-2c3257e690-19bfa.test-cncf-aws.k8s.io

W1012 18:16:24.966647    4865 validate_cluster.go:184] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-2c3257e690-19bfa.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.large	4	4	us-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1012 18:16:35.009296    4865 validate_cluster.go:232] (will retry): cluster not yet healthy
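
The validation failure above is kops's DNS bootstrap check: the dns-controller deployment on the control-plane node is expected to replace the placeholder record 203.0.113.123 with the API server's real address once the node is up. A minimal diagnostic sketch follows, assuming shell access with dig, kubectl configured for this cluster, and a kops binary on PATH (these are assumptions, not commands taken from this log):

    # Check whether the API DNS record still resolves to the kops placeholder address.
    dig +short api.e2e-2c3257e690-19bfa.test-cncf-aws.k8s.io

    # Inspect the dns-controller deployment logs mentioned in the validation message.
    kubectl -n kube-system logs deployment/dns-controller --tail=50

    # Re-run validation once DNS has propagated.
    kops validate cluster --name e2e-2c3257e690-19bfa.test-cncf-aws.k8s.io --wait 15m

In this run the harness simply keeps retrying validation, as the repeated blocks below show.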
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.large	4	4	us-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1012 18:16:45.045486    4865 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.large	4	4	us-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1012 18:16:55.082531    4865 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.large	4	4	us-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1012 18:17:05.130057    4865 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.large	4	4	us-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1012 18:17:15.165464    4865 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.large	4	4	us-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1012 18:17:25.202213    4865 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.large	4	4	us-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1012 18:17:35.238500    4865 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.large	4	4	us-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1012 18:17:45.271604    4865 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.large	4	4	us-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1012 18:17:55.307747    4865 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.large	4	4	us-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1012 18:18:05.343741    4865 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.large	4	4	us-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1012 18:18:15.415324    4865 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.large	4	4	us-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1012 18:18:25.476056    4865 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.large	4	4	us-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1012 18:18:35.524702    4865 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.large	4	4	us-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1012 18:18:45.557428    4865 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.large	4	4	us-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1012 18:18:55.593683    4865 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.large	4	4	us-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1012 18:19:05.630248    4865 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.large	4	4	us-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1012 18:19:15.661523    4865 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.large	4	4	us-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1012 18:19:25.819649    4865 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.large	4	4	us-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1012 18:19:35.849713    4865 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.large	4	4	us-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1012 18:19:45.897768    4865 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.large	4	4	us-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1012 18:19:55.963234    4865 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.large	4	4	us-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1012 18:20:05.992532    4865 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.large	4	4	us-west-1a

... skipping 8 lines ...
Machine	i-0e4cc36ee7feab59e				machine "i-0e4cc36ee7feab59e" has not yet joined cluster
Machine	i-0f6a463126dfbc120				machine "i-0f6a463126dfbc120" has not yet joined cluster
Node	ip-172-20-43-113.us-west-1.compute.internal	master "ip-172-20-43-113.us-west-1.compute.internal" is missing kube-controller-manager pod
Pod	kube-system/coredns-6c8944dbdc-5mxb9		system-cluster-critical pod "coredns-6c8944dbdc-5mxb9" is pending
Pod	kube-system/coredns-autoscaler-84d4cfd89c-rnqpw	system-cluster-critical pod "coredns-autoscaler-84d4cfd89c-rnqpw" is pending

Validation Failed
W1012 18:20:17.849835    4865 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.large	4	4	us-west-1a

... skipping 9 lines ...
Machine	i-0f6a463126dfbc120								machine "i-0f6a463126dfbc120" has not yet joined cluster
Node	ip-172-20-43-113.us-west-1.compute.internal					master "ip-172-20-43-113.us-west-1.compute.internal" is missing kube-controller-manager pod
Pod	kube-system/coredns-6c8944dbdc-5mxb9						system-cluster-critical pod "coredns-6c8944dbdc-5mxb9" is pending
Pod	kube-system/coredns-autoscaler-84d4cfd89c-rnqpw					system-cluster-critical pod "coredns-autoscaler-84d4cfd89c-rnqpw" is pending
Pod	kube-system/kube-controller-manager-ip-172-20-43-113.us-west-1.compute.internal	system-cluster-critical pod "kube-controller-manager-ip-172-20-43-113.us-west-1.compute.internal" is pending

Validation Failed
W1012 18:20:29.038477    4865 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.large	4	4	us-west-1a

... skipping 10 lines ...
Pod	kube-system/coredns-autoscaler-84d4cfd89c-rnqpw	system-cluster-critical pod "coredns-autoscaler-84d4cfd89c-rnqpw" is pending
Pod	kube-system/ebs-csi-node-7bjvs			system-node-critical pod "ebs-csi-node-7bjvs" is pending
Pod	kube-system/ebs-csi-node-98wxq			system-node-critical pod "ebs-csi-node-98wxq" is pending
Pod	kube-system/ebs-csi-node-x6v6l			system-node-critical pod "ebs-csi-node-x6v6l" is pending
Pod	kube-system/kopeio-networking-agent-ncvbn	system-node-critical pod "kopeio-networking-agent-ncvbn" is pending

Validation Failed
W1012 18:20:40.142032    4865 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.large	4	4	us-west-1a

... skipping 8 lines ...
VALIDATION ERRORS
KIND	NAME						MESSAGE
Node	ip-172-20-37-53.us-west-1.compute.internal	node "ip-172-20-37-53.us-west-1.compute.internal" of role "node" is not ready
Pod	kube-system/ebs-csi-node-98wxq			system-node-critical pod "ebs-csi-node-98wxq" is pending
Pod	kube-system/ebs-csi-node-krphg			system-node-critical pod "ebs-csi-node-krphg" is pending

Validation Failed
W1012 18:20:51.280756    4865 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.large	4	4	us-west-1a

... skipping 21 lines ...
ip-172-20-59-223.us-west-1.compute.internal	node	True

VALIDATION ERRORS
KIND	NAME									MESSAGE
Pod	kube-system/kube-proxy-ip-172-20-56-153.us-west-1.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-56-153.us-west-1.compute.internal" is pending

Validation Failed
W1012 18:21:13.795713    4865 validate_cluster.go:232] (will retry): cluster not yet healthy
W1012 18:21:23.814899    4865 validate_cluster.go:184] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-2c3257e690-19bfa.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.large	4	4	us-west-1a

NODE STATUS
... skipping 6 lines ...

VALIDATION ERRORS
KIND	NAME									MESSAGE
Pod	kube-system/kube-proxy-ip-172-20-47-26.us-west-1.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-47-26.us-west-1.compute.internal" is pending
Pod	kube-system/kube-proxy-ip-172-20-59-223.us-west-1.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-59-223.us-west-1.compute.internal" is pending

Validation Failed
W1012 18:21:35.090228    4865 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.large	4	4	us-west-1a

... skipping 6 lines ...
ip-172-20-59-223.us-west-1.compute.internal	node	True

VALIDATION ERRORS
KIND	NAME									MESSAGE
Pod	kube-system/kube-proxy-ip-172-20-37-53.us-west-1.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-37-53.us-west-1.compute.internal" is pending

Validation Failed
W1012 18:21:46.255521    4865 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.large	4	4	us-west-1a

... skipping 485 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: emptydir]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver emptydir doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 331 lines ...
STEP: Destroying namespace "apply-3573" for this suite.
[AfterEach] [sig-api-machinery] ServerSideApply
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/apply.go:56

•
------------------------------
{"msg":"PASSED [sig-api-machinery] ServerSideApply should not remove a field if an owner unsets the field but other managers still have ownership of the field","total":-1,"completed":1,"skipped":6,"failed":0}

S
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 18:24:18.535: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
W1012 18:24:18.749129    5574 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 12 18:24:18.749: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244
[It] should ignore not found error with --for=delete
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1836
STEP: calling kubectl wait --for=delete
Oct 12 18:24:18.863: INFO: Running '/tmp/kubectl1775810352/kubectl --server=https://api.e2e-2c3257e690-19bfa.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-679 wait --for=delete pod/doesnotexist'
Oct 12 18:24:19.534: INFO: stderr: ""
Oct 12 18:24:19.534: INFO: stdout: ""
Oct 12 18:24:19.534: INFO: Running '/tmp/kubectl1775810352/kubectl --server=https://api.e2e-2c3257e690-19bfa.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-679 wait --for=delete pod --selector=app.kubernetes.io/name=noexist'
... skipping 3 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 18:24:19.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-679" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client kubectl wait should ignore not found error with --for=delete","total":-1,"completed":1,"skipped":2,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:24:19.961: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 46 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: hostPathSymlink]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver hostPathSymlink doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 43 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 18:24:20.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4272" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl apply should apply a new configuration to an existing RC","total":-1,"completed":1,"skipped":1,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:24:20.307: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 101 lines ...
Oct 12 18:24:21.139: INFO: pv is nil


S [SKIPPING] in Spec Setup (BeforeEach) [2.439 seconds]
[sig-storage] PersistentVolumes GCEPD
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should test that deleting the PV before the pod does not cause pod deletion to fail on PD detach [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:142

  Only supported for providers [gce gke] (not aws)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:85
------------------------------
... skipping 16 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 18:24:22.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9745" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl create quota should create a quota without scopes","total":-1,"completed":2,"skipped":11,"failed":0}
[BeforeEach] [sig-node] crictl
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 18:24:22.522: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crictl
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 24 lines ...
W1012 18:24:19.984595    5536 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 12 18:24:19.984: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a volume subpath [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test substitution in volume subpath
Oct 12 18:24:20.139: INFO: Waiting up to 5m0s for pod "var-expansion-73af6db7-3106-452d-bcdd-d5276126bf8f" in namespace "var-expansion-6994" to be "Succeeded or Failed"
Oct 12 18:24:20.189: INFO: Pod "var-expansion-73af6db7-3106-452d-bcdd-d5276126bf8f": Phase="Pending", Reason="", readiness=false. Elapsed: 49.990008ms
Oct 12 18:24:22.240: INFO: Pod "var-expansion-73af6db7-3106-452d-bcdd-d5276126bf8f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.101695112s
Oct 12 18:24:24.293: INFO: Pod "var-expansion-73af6db7-3106-452d-bcdd-d5276126bf8f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.154041285s
Oct 12 18:24:26.344: INFO: Pod "var-expansion-73af6db7-3106-452d-bcdd-d5276126bf8f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.205211965s
STEP: Saw pod success
Oct 12 18:24:26.344: INFO: Pod "var-expansion-73af6db7-3106-452d-bcdd-d5276126bf8f" satisfied condition "Succeeded or Failed"
Oct 12 18:24:26.395: INFO: Trying to get logs from node ip-172-20-47-26.us-west-1.compute.internal pod var-expansion-73af6db7-3106-452d-bcdd-d5276126bf8f container dapi-container: <nil>
STEP: delete the pod
Oct 12 18:24:26.546: INFO: Waiting for pod var-expansion-73af6db7-3106-452d-bcdd-d5276126bf8f to disappear
Oct 12 18:24:26.596: INFO: Pod var-expansion-73af6db7-3106-452d-bcdd-d5276126bf8f no longer exists
[AfterEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:8.114 seconds]
[sig-node] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should allow substituting values in a volume subpath [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a volume subpath [Conformance]","total":-1,"completed":1,"skipped":4,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:24:26.766: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 92 lines ...
Oct 12 18:24:20.582: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-map-0de405b4-1ecb-4634-a1f8-4fbaeae0e1c7
STEP: Creating a pod to test consume configMaps
Oct 12 18:24:20.786: INFO: Waiting up to 5m0s for pod "pod-configmaps-3aad139c-c528-425d-a869-670392ae077e" in namespace "configmap-9862" to be "Succeeded or Failed"
Oct 12 18:24:20.835: INFO: Pod "pod-configmaps-3aad139c-c528-425d-a869-670392ae077e": Phase="Pending", Reason="", readiness=false. Elapsed: 49.253266ms
Oct 12 18:24:22.886: INFO: Pod "pod-configmaps-3aad139c-c528-425d-a869-670392ae077e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100528858s
Oct 12 18:24:24.936: INFO: Pod "pod-configmaps-3aad139c-c528-425d-a869-670392ae077e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.150168122s
Oct 12 18:24:26.991: INFO: Pod "pod-configmaps-3aad139c-c528-425d-a869-670392ae077e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.205404721s
STEP: Saw pod success
Oct 12 18:24:26.991: INFO: Pod "pod-configmaps-3aad139c-c528-425d-a869-670392ae077e" satisfied condition "Succeeded or Failed"
Oct 12 18:24:27.041: INFO: Trying to get logs from node ip-172-20-59-223.us-west-1.compute.internal pod pod-configmaps-3aad139c-c528-425d-a869-670392ae077e container agnhost-container: <nil>
STEP: delete the pod
Oct 12 18:24:27.504: INFO: Waiting for pod pod-configmaps-3aad139c-c528-425d-a869-670392ae077e to disappear
Oct 12 18:24:27.553: INFO: Pod pod-configmaps-3aad139c-c528-425d-a869-670392ae077e no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:9.054 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":9,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:24:27.714: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 23 lines ...
Oct 12 18:24:18.769: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-map-cabc4891-7cb0-4191-aa2e-e43e75315f40
STEP: Creating a pod to test consume configMaps
Oct 12 18:24:18.992: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-022349b5-660d-42e5-bd4e-40e50e663fc1" in namespace "projected-9764" to be "Succeeded or Failed"
Oct 12 18:24:19.042: INFO: Pod "pod-projected-configmaps-022349b5-660d-42e5-bd4e-40e50e663fc1": Phase="Pending", Reason="", readiness=false. Elapsed: 49.614704ms
Oct 12 18:24:21.097: INFO: Pod "pod-projected-configmaps-022349b5-660d-42e5-bd4e-40e50e663fc1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.104877921s
Oct 12 18:24:23.151: INFO: Pod "pod-projected-configmaps-022349b5-660d-42e5-bd4e-40e50e663fc1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.159482771s
Oct 12 18:24:25.201: INFO: Pod "pod-projected-configmaps-022349b5-660d-42e5-bd4e-40e50e663fc1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.209265351s
Oct 12 18:24:27.255: INFO: Pod "pod-projected-configmaps-022349b5-660d-42e5-bd4e-40e50e663fc1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.263597548s
STEP: Saw pod success
Oct 12 18:24:27.256: INFO: Pod "pod-projected-configmaps-022349b5-660d-42e5-bd4e-40e50e663fc1" satisfied condition "Succeeded or Failed"
Oct 12 18:24:27.305: INFO: Trying to get logs from node ip-172-20-59-223.us-west-1.compute.internal pod pod-projected-configmaps-022349b5-660d-42e5-bd4e-40e50e663fc1 container agnhost-container: <nil>
STEP: delete the pod
Oct 12 18:24:27.898: INFO: Waiting for pod pod-projected-configmaps-022349b5-660d-42e5-bd4e-40e50e663fc1 to disappear
Oct 12 18:24:27.948: INFO: Pod pod-projected-configmaps-022349b5-660d-42e5-bd4e-40e50e663fc1 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:9.541 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":0,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 38 lines ...
• [SLOW TEST:9.959 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should run through the lifecycle of Pods and PodStatus [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]","total":-1,"completed":1,"skipped":0,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:24:28.512: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 101 lines ...
• [SLOW TEST:10.955 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should include webhook resources in discovery documents [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":-1,"completed":1,"skipped":3,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-storage] Volume limits
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 27 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Oct 12 18:24:27.248: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e9a7913d-84d8-4621-8d33-7eb5bfc118fb" in namespace "downward-api-2460" to be "Succeeded or Failed"
Oct 12 18:24:27.298: INFO: Pod "downwardapi-volume-e9a7913d-84d8-4621-8d33-7eb5bfc118fb": Phase="Pending", Reason="", readiness=false. Elapsed: 49.752742ms
Oct 12 18:24:29.348: INFO: Pod "downwardapi-volume-e9a7913d-84d8-4621-8d33-7eb5bfc118fb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100092018s
Oct 12 18:24:31.399: INFO: Pod "downwardapi-volume-e9a7913d-84d8-4621-8d33-7eb5bfc118fb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.150812069s
STEP: Saw pod success
Oct 12 18:24:31.399: INFO: Pod "downwardapi-volume-e9a7913d-84d8-4621-8d33-7eb5bfc118fb" satisfied condition "Succeeded or Failed"
Oct 12 18:24:31.449: INFO: Trying to get logs from node ip-172-20-47-26.us-west-1.compute.internal pod downwardapi-volume-e9a7913d-84d8-4621-8d33-7eb5bfc118fb container client-container: <nil>
STEP: delete the pod
Oct 12 18:24:31.561: INFO: Waiting for pod downwardapi-volume-e9a7913d-84d8-4621-8d33-7eb5bfc118fb to disappear
Oct 12 18:24:31.611: INFO: Pod downwardapi-volume-e9a7913d-84d8-4621-8d33-7eb5bfc118fb no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 18:24:31.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2460" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":30,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:24:31.742: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 68 lines ...
Oct 12 18:24:19.441: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-map-8a920000-6343-45c8-a38d-1fc9141a494a
STEP: Creating a pod to test consume configMaps
Oct 12 18:24:19.671: INFO: Waiting up to 5m0s for pod "pod-configmaps-6e5e4a50-eb63-4323-a5a1-b5dad1db540d" in namespace "configmap-4557" to be "Succeeded or Failed"
Oct 12 18:24:19.725: INFO: Pod "pod-configmaps-6e5e4a50-eb63-4323-a5a1-b5dad1db540d": Phase="Pending", Reason="", readiness=false. Elapsed: 53.54306ms
Oct 12 18:24:21.777: INFO: Pod "pod-configmaps-6e5e4a50-eb63-4323-a5a1-b5dad1db540d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.1052843s
Oct 12 18:24:23.830: INFO: Pod "pod-configmaps-6e5e4a50-eb63-4323-a5a1-b5dad1db540d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.158221247s
Oct 12 18:24:25.882: INFO: Pod "pod-configmaps-6e5e4a50-eb63-4323-a5a1-b5dad1db540d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.210481299s
Oct 12 18:24:27.934: INFO: Pod "pod-configmaps-6e5e4a50-eb63-4323-a5a1-b5dad1db540d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.262484882s
Oct 12 18:24:29.987: INFO: Pod "pod-configmaps-6e5e4a50-eb63-4323-a5a1-b5dad1db540d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.315294844s
Oct 12 18:24:32.039: INFO: Pod "pod-configmaps-6e5e4a50-eb63-4323-a5a1-b5dad1db540d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.367164867s
STEP: Saw pod success
Oct 12 18:24:32.039: INFO: Pod "pod-configmaps-6e5e4a50-eb63-4323-a5a1-b5dad1db540d" satisfied condition "Succeeded or Failed"
Oct 12 18:24:32.093: INFO: Trying to get logs from node ip-172-20-37-53.us-west-1.compute.internal pod pod-configmaps-6e5e4a50-eb63-4323-a5a1-b5dad1db540d container agnhost-container: <nil>
STEP: delete the pod
Oct 12 18:24:32.221: INFO: Waiting for pod pod-configmaps-6e5e4a50-eb63-4323-a5a1-b5dad1db540d to disappear
Oct 12 18:24:32.273: INFO: Pod pod-configmaps-6e5e4a50-eb63-4323-a5a1-b5dad1db540d no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:13.913 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":3,"failed":0}

SS
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":2,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    on terminated container
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:134
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":23,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 3 lines ...
Oct 12 18:24:19.834: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-4fa6356c-e0b5-4cc7-8673-efd921ffcf63
STEP: Creating a pod to test consume secrets
Oct 12 18:24:21.788: INFO: Waiting up to 5m0s for pod "pod-secrets-21dddd4e-c253-450c-a054-8969c2172aa8" in namespace "secrets-9334" to be "Succeeded or Failed"
Oct 12 18:24:21.837: INFO: Pod "pod-secrets-21dddd4e-c253-450c-a054-8969c2172aa8": Phase="Pending", Reason="", readiness=false. Elapsed: 49.237674ms
Oct 12 18:24:23.888: INFO: Pod "pod-secrets-21dddd4e-c253-450c-a054-8969c2172aa8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100359196s
Oct 12 18:24:25.941: INFO: Pod "pod-secrets-21dddd4e-c253-450c-a054-8969c2172aa8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.152898287s
Oct 12 18:24:27.991: INFO: Pod "pod-secrets-21dddd4e-c253-450c-a054-8969c2172aa8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.203342259s
Oct 12 18:24:30.041: INFO: Pod "pod-secrets-21dddd4e-c253-450c-a054-8969c2172aa8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.253158922s
Oct 12 18:24:32.095: INFO: Pod "pod-secrets-21dddd4e-c253-450c-a054-8969c2172aa8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.306547349s
Oct 12 18:24:34.145: INFO: Pod "pod-secrets-21dddd4e-c253-450c-a054-8969c2172aa8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.357189265s
STEP: Saw pod success
Oct 12 18:24:34.145: INFO: Pod "pod-secrets-21dddd4e-c253-450c-a054-8969c2172aa8" satisfied condition "Succeeded or Failed"
Oct 12 18:24:34.195: INFO: Trying to get logs from node ip-172-20-37-53.us-west-1.compute.internal pod pod-secrets-21dddd4e-c253-450c-a054-8969c2172aa8 container secret-volume-test: <nil>
STEP: delete the pod
Oct 12 18:24:34.307: INFO: Waiting for pod pod-secrets-21dddd4e-c253-450c-a054-8969c2172aa8 to disappear
Oct 12 18:24:34.357: INFO: Pod pod-secrets-21dddd4e-c253-450c-a054-8969c2172aa8 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 5 lines ...
• [SLOW TEST:15.932 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":11,"failed":0}

S
------------------------------
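The Secrets block above follows the usual e2e pattern: create a Secret, start a short-lived pod that mounts it, poll until the pod reaches "Succeeded or Failed", read the container logs, then delete the pod. Replayed by hand with kubectl it looks roughly like the sketch below; the namespace, secret name, key and image are illustrative stand-ins, not values taken from this run.

# create an isolated namespace and a secret with a single key
kubectl create namespace secrets-demo
kubectl create secret generic demo-secret --from-literal=data-1=value-1 -n secrets-demo

# pod that mounts the secret read-only and just prints the key's value
kubectl apply -n secrets-demo -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox:1.35
    command: ["sh", "-c", "cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: demo-secret
EOF

# poll the pod phase, much like the framework's 5m wait above
while true; do
  phase=$(kubectl get pod pod-secrets-demo -n secrets-demo -o jsonpath='{.status.phase}')
  case "$phase" in Succeeded|Failed) break ;; esac
  sleep 2
done
kubectl logs pod-secrets-demo -n secrets-demo
kubectl delete pod pod-secrets-demo -n secrets-demo
kubectl delete namespace secrets-demo
------------------------------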
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:24:34.585: INFO: Only supported for providers [gce gke] (not aws)
... skipping 77 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 18:24:36.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-3883" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":3,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 462 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:74
    should proxy through a service and a pod  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","total":-1,"completed":1,"skipped":13,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 37 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":7,"failed":0}

SSS
------------------------------
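The lifecycle-hook spec that passed above exercises a postStart httpGet hook, which the kubelet runs right after the container starts (the e2e variant aims it at a second target pod). A stand-alone sketch of the same API surface is below; it uses exec hooks on a single nginx container so no second pod is needed, and the image, names and commands are illustrative only.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: lifecycle-hook-demo
spec:
  containers:
  - name: web
    image: nginx:1.21
    lifecycle:
      postStart:
        # the e2e variant uses an httpGet hook (path/port/host) aimed at a target pod
        exec:
          command: ["sh", "-c", "date > /usr/share/nginx/html/poststart.txt"]
      preStop:
        exec:
          command: ["sh", "-c", "sleep 5"]   # give in-flight requests time to drain
EOF
# once the pod is Ready, the postStart marker is visible inside the container
kubectl wait --for=condition=Ready pod/lifecycle-hook-demo --timeout=120s
kubectl exec lifecycle-hook-demo -- cat /usr/share/nginx/html/poststart.txt
kubectl delete pod lifecycle-hook-demo
------------------------------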
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:24:41.045: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 224 lines ...
• [SLOW TEST:25.955 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":4,"failed":0}

SS
------------------------------
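The probing-container spec above waits for the kubelet to restart a pod whose /healthz endpoint deliberately starts failing. The sketch below shows the same livenessProbe wiring against a plain nginx container (probing / instead of a purpose-built failing /healthz); names, image and thresholds are illustrative.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http-demo
spec:
  containers:
  - name: web
    image: nginx:1.21
    livenessProbe:
      httpGet:
        path: /        # the e2e test probes /healthz on an image that fails it on purpose
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 5
      failureThreshold: 3
EOF
# the RESTARTS column (and `kubectl describe pod`) shows the kubelet restarting the
# container once the probe fails failureThreshold times in a row
kubectl get pod liveness-http-demo -w
------------------------------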
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 60 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:379
    should support exec through an HTTP proxy
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:439
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec through an HTTP proxy","total":-1,"completed":1,"skipped":4,"failed":0}

S
------------------------------
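The kubectl case above verifies that `kubectl exec` still works when the client is forced through an HTTP proxy, which it honours via the standard proxy environment variables. A hand-run approximation is sketched below, assuming a forward proxy (squid, tinyproxy, or similar) is already listening on 127.0.0.1:3128; that address, the pod name and the image are assumptions, not values from this run.

# start a long-running pod to exec into
kubectl run exec-proxy-demo --image=busybox:1.35 --restart=Never -- sleep 3600
kubectl wait --for=condition=Ready pod/exec-proxy-demo --timeout=120s

# route the apiserver connection through the proxy for this one invocation
HTTPS_PROXY=http://127.0.0.1:3128 kubectl exec exec-proxy-demo -- echo "hello via proxy"

kubectl delete pod exec-proxy-demo
------------------------------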
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:24:44.754: INFO: Only supported for providers [gce gke] (not aws)
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: gcepd]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1302
------------------------------
... skipping 16 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 18:24:45.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3283" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl create quota should create a quota with scopes","total":-1,"completed":2,"skipped":6,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:24:45.259: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 98 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":2,"skipped":19,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 15 lines ...
• [SLOW TEST:10.181 seconds]
[sig-node] InitContainer [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should invoke init containers on a RestartNever pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":-1,"completed":3,"skipped":4,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:24:47.375: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 9 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:157

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":1,"skipped":1,"failed":0}
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 18:24:42.813: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Oct 12 18:24:43.113: INFO: Waiting up to 5m0s for pod "downwardapi-volume-536eaae8-8584-4d2a-bf87-5c3e3c67bd45" in namespace "downward-api-7882" to be "Succeeded or Failed"
Oct 12 18:24:43.163: INFO: Pod "downwardapi-volume-536eaae8-8584-4d2a-bf87-5c3e3c67bd45": Phase="Pending", Reason="", readiness=false. Elapsed: 49.379801ms
Oct 12 18:24:45.217: INFO: Pod "downwardapi-volume-536eaae8-8584-4d2a-bf87-5c3e3c67bd45": Phase="Pending", Reason="", readiness=false. Elapsed: 2.102931473s
Oct 12 18:24:47.268: INFO: Pod "downwardapi-volume-536eaae8-8584-4d2a-bf87-5c3e3c67bd45": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.154748317s
STEP: Saw pod success
Oct 12 18:24:47.268: INFO: Pod "downwardapi-volume-536eaae8-8584-4d2a-bf87-5c3e3c67bd45" satisfied condition "Succeeded or Failed"
Oct 12 18:24:47.318: INFO: Trying to get logs from node ip-172-20-37-53.us-west-1.compute.internal pod downwardapi-volume-536eaae8-8584-4d2a-bf87-5c3e3c67bd45 container client-container: <nil>
STEP: delete the pod
Oct 12 18:24:47.428: INFO: Waiting for pod downwardapi-volume-536eaae8-8584-4d2a-bf87-5c3e3c67bd45 to disappear
Oct 12 18:24:47.477: INFO: Pod downwardapi-volume-536eaae8-8584-4d2a-bf87-5c3e3c67bd45 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 14 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 18:24:47.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-7651" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":10,"failed":0}

S
------------------------------
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 26 lines ...
• [SLOW TEST:17.084 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":-1,"completed":2,"skipped":7,"failed":0}

SSSSSSSSSSS
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":1,"failed":0}
[BeforeEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 18:24:47.588: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 10 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 18:24:51.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-3675" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":1,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:24:51.399: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 47 lines ...
Oct 12 18:24:32.342: INFO: PersistentVolume nfs-cfhst found and phase=Bound (52.768817ms)
Oct 12 18:24:32.396: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-dw7zq] to have phase Bound
Oct 12 18:24:32.484: INFO: PersistentVolumeClaim pvc-dw7zq found and phase=Bound (51.560308ms)
STEP: Checking pod has write access to PersistentVolumes
Oct 12 18:24:32.540: INFO: Creating nfs test pod
Oct 12 18:24:32.592: INFO: Pod should terminate with exitcode 0 (success)
Oct 12 18:24:32.592: INFO: Waiting up to 5m0s for pod "pvc-tester-7fpdk" in namespace "pv-6570" to be "Succeeded or Failed"
Oct 12 18:24:32.647: INFO: Pod "pvc-tester-7fpdk": Phase="Pending", Reason="", readiness=false. Elapsed: 54.856347ms
Oct 12 18:24:34.699: INFO: Pod "pvc-tester-7fpdk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.107648919s
Oct 12 18:24:36.753: INFO: Pod "pvc-tester-7fpdk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.161305426s
STEP: Saw pod success
Oct 12 18:24:36.753: INFO: Pod "pvc-tester-7fpdk" satisfied condition "Succeeded or Failed"
Oct 12 18:24:36.753: INFO: Pod pvc-tester-7fpdk succeeded 
Oct 12 18:24:36.753: INFO: Deleting pod "pvc-tester-7fpdk" in namespace "pv-6570"
Oct 12 18:24:36.826: INFO: Wait up to 5m0s for pod "pvc-tester-7fpdk" to be fully deleted
Oct 12 18:24:36.931: INFO: Creating nfs test pod
Oct 12 18:24:36.985: INFO: Pod should terminate with exitcode 0 (success)
Oct 12 18:24:36.985: INFO: Waiting up to 5m0s for pod "pvc-tester-xgckm" in namespace "pv-6570" to be "Succeeded or Failed"
Oct 12 18:24:37.036: INFO: Pod "pvc-tester-xgckm": Phase="Pending", Reason="", readiness=false. Elapsed: 50.590218ms
Oct 12 18:24:39.088: INFO: Pod "pvc-tester-xgckm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.102274051s
Oct 12 18:24:41.139: INFO: Pod "pvc-tester-xgckm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.154038775s
Oct 12 18:24:43.190: INFO: Pod "pvc-tester-xgckm": Phase="Pending", Reason="", readiness=false. Elapsed: 6.204932066s
Oct 12 18:24:45.244: INFO: Pod "pvc-tester-xgckm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.258715874s
STEP: Saw pod success
Oct 12 18:24:45.244: INFO: Pod "pvc-tester-xgckm" satisfied condition "Succeeded or Failed"
Oct 12 18:24:45.244: INFO: Pod pvc-tester-xgckm succeeded 
Oct 12 18:24:45.244: INFO: Deleting pod "pvc-tester-xgckm" in namespace "pv-6570"
Oct 12 18:24:45.299: INFO: Wait up to 5m0s for pod "pvc-tester-xgckm" to be fully deleted
STEP: Deleting PVCs to invoke reclaim policy
Oct 12 18:24:45.555: INFO: Deleting PVC pvc-qb8mm to trigger reclamation of PV nfs-qbxn4
Oct 12 18:24:45.555: INFO: Deleting PersistentVolumeClaim "pvc-qb8mm"
... skipping 31 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:122
    with multiple PVs and PVCs all in same ns
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:212
      should create 2 PVs and 4 PVCs: test write access
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:233
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns should create 2 PVs and 4 PVCs: test write access","total":-1,"completed":1,"skipped":1,"failed":0}

SSSSSSS
------------------------------
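The NFS PersistentVolumes block above pre-provisions PVs, binds PVCs to them, runs short writer pods, then deletes the PVCs to exercise the reclaim policy. A compressed stand-alone version of that shape is sketched below; the NFS server address and export path are placeholders for an export you already have, and everything else is illustrative.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-demo
spec:
  capacity:
    storage: 1Gi
  accessModes: ["ReadWriteMany"]
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 10.0.0.10      # assumption: a reachable NFS server
    path: /exports/demo    # assumption: an existing export
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-demo-claim
spec:
  storageClassName: ""     # empty string: bind to a pre-provisioned PV, no dynamic provisioning
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: pvc-tester-demo
spec:
  restartPolicy: Never
  containers:
  - name: writer
    image: busybox:1.35
    command: ["sh", "-c", "echo hello > /mnt/volume/file && cat /mnt/volume/file"]
    volumeMounts:
    - name: vol
      mountPath: /mnt/volume
  volumes:
  - name: vol
    persistentVolumeClaim:
      claimName: nfs-demo-claim
EOF
# deleting the claim afterwards is what triggers the PV's reclaim policy
kubectl get pv,pvc
------------------------------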
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 18:24:46.913: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward api env vars
Oct 12 18:24:47.218: INFO: Waiting up to 5m0s for pod "downward-api-ae9edc63-9b5c-4af6-b8c3-a71da6b837a2" in namespace "downward-api-1107" to be "Succeeded or Failed"
Oct 12 18:24:47.268: INFO: Pod "downward-api-ae9edc63-9b5c-4af6-b8c3-a71da6b837a2": Phase="Pending", Reason="", readiness=false. Elapsed: 50.638683ms
Oct 12 18:24:49.319: INFO: Pod "downward-api-ae9edc63-9b5c-4af6-b8c3-a71da6b837a2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100871714s
Oct 12 18:24:51.369: INFO: Pod "downward-api-ae9edc63-9b5c-4af6-b8c3-a71da6b837a2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.150942335s
Oct 12 18:24:53.420: INFO: Pod "downward-api-ae9edc63-9b5c-4af6-b8c3-a71da6b837a2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.201953936s
STEP: Saw pod success
Oct 12 18:24:53.420: INFO: Pod "downward-api-ae9edc63-9b5c-4af6-b8c3-a71da6b837a2" satisfied condition "Succeeded or Failed"
Oct 12 18:24:53.470: INFO: Trying to get logs from node ip-172-20-59-223.us-west-1.compute.internal pod downward-api-ae9edc63-9b5c-4af6-b8c3-a71da6b837a2 container dapi-container: <nil>
STEP: delete the pod
Oct 12 18:24:53.615: INFO: Waiting for pod downward-api-ae9edc63-9b5c-4af6-b8c3-a71da6b837a2 to disappear
Oct 12 18:24:53.665: INFO: Pod downward-api-ae9edc63-9b5c-4af6-b8c3-a71da6b837a2 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.853 seconds]
[sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":20,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:24:53.779: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 60 lines ...
Oct 12 18:24:47.583: INFO: PersistentVolumeClaim pvc-h4dzc found but phase is Pending instead of Bound.
Oct 12 18:24:49.633: INFO: PersistentVolumeClaim pvc-h4dzc found and phase=Bound (8.251242862s)
Oct 12 18:24:49.633: INFO: Waiting up to 3m0s for PersistentVolume local-7vf8z to have phase Bound
Oct 12 18:24:49.685: INFO: PersistentVolume local-7vf8z found and phase=Bound (52.264669ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-jmfh
STEP: Creating a pod to test exec-volume-test
Oct 12 18:24:49.846: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-jmfh" in namespace "volume-800" to be "Succeeded or Failed"
Oct 12 18:24:49.896: INFO: Pod "exec-volume-test-preprovisionedpv-jmfh": Phase="Pending", Reason="", readiness=false. Elapsed: 49.420517ms
Oct 12 18:24:51.948: INFO: Pod "exec-volume-test-preprovisionedpv-jmfh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.101705368s
Oct 12 18:24:53.999: INFO: Pod "exec-volume-test-preprovisionedpv-jmfh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.152802056s
STEP: Saw pod success
Oct 12 18:24:53.999: INFO: Pod "exec-volume-test-preprovisionedpv-jmfh" satisfied condition "Succeeded or Failed"
Oct 12 18:24:54.049: INFO: Trying to get logs from node ip-172-20-37-53.us-west-1.compute.internal pod exec-volume-test-preprovisionedpv-jmfh container exec-container-preprovisionedpv-jmfh: <nil>
STEP: delete the pod
Oct 12 18:24:54.155: INFO: Waiting for pod exec-volume-test-preprovisionedpv-jmfh to disappear
Oct 12 18:24:54.204: INFO: Pod exec-volume-test-preprovisionedpv-jmfh no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-jmfh
Oct 12 18:24:54.204: INFO: Deleting pod "exec-volume-test-preprovisionedpv-jmfh" in namespace "volume-800"
... skipping 20 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":2,"skipped":14,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-node] PreStop
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 19 lines ...
• [SLOW TEST:26.752 seconds]
[sig-node] PreStop
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  graceful pod terminated should wait until preStop hook completes the process
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:170
------------------------------
{"msg":"PASSED [sig-node] PreStop graceful pod terminated should wait until preStop hook completes the process","total":-1,"completed":2,"skipped":8,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
... skipping 114 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should verify that all csinodes have volume limits
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumelimits.go:238
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should verify that all csinodes have volume limits","total":-1,"completed":2,"skipped":6,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 18:24:56.710: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support seccomp unconfined on the container [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:161
STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod
Oct 12 18:24:57.011: INFO: Waiting up to 5m0s for pod "security-context-75d9a8c5-33a9-4f10-ac69-8885428ba3e7" in namespace "security-context-3684" to be "Succeeded or Failed"
Oct 12 18:24:57.060: INFO: Pod "security-context-75d9a8c5-33a9-4f10-ac69-8885428ba3e7": Phase="Pending", Reason="", readiness=false. Elapsed: 49.428875ms
Oct 12 18:24:59.111: INFO: Pod "security-context-75d9a8c5-33a9-4f10-ac69-8885428ba3e7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.099933169s
STEP: Saw pod success
Oct 12 18:24:59.111: INFO: Pod "security-context-75d9a8c5-33a9-4f10-ac69-8885428ba3e7" satisfied condition "Succeeded or Failed"
Oct 12 18:24:59.160: INFO: Trying to get logs from node ip-172-20-47-26.us-west-1.compute.internal pod security-context-75d9a8c5-33a9-4f10-ac69-8885428ba3e7 container test-container: <nil>
STEP: delete the pod
Oct 12 18:24:59.267: INFO: Waiting for pod security-context-75d9a8c5-33a9-4f10-ac69-8885428ba3e7 to disappear
Oct 12 18:24:59.317: INFO: Pod security-context-75d9a8c5-33a9-4f10-ac69-8885428ba3e7 no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 18:24:59.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-3684" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Security Context should support seccomp unconfined on the container [LinuxOnly]","total":-1,"completed":3,"skipped":9,"failed":0}

S
------------------------------
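The Security Context case above requests an unconfined seccomp profile; the log still mentions the old seccomp.security.alpha.kubernetes.io/pod annotation, which since v1.19 is expressed with the securityContext.seccompProfile field instead. An illustrative pod (names and image are placeholders):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: seccomp-unconfined-demo
spec:
  securityContext:
    seccompProfile:
      type: Unconfined     # pod-wide; can also be set per container
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.35
    # "Seccomp: 0" in the process status means no seccomp filter is applied
    command: ["sh", "-c", "grep Seccomp /proc/self/status"]
EOF
kubectl logs seccomp-unconfined-demo
------------------------------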
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
... skipping 11 lines ...
Oct 12 18:24:20.832: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-3526gqhph
STEP: creating a claim
Oct 12 18:24:20.884: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-gvjk
STEP: Creating a pod to test subpath
Oct 12 18:24:21.038: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-gvjk" in namespace "provisioning-3526" to be "Succeeded or Failed"
Oct 12 18:24:21.089: INFO: Pod "pod-subpath-test-dynamicpv-gvjk": Phase="Pending", Reason="", readiness=false. Elapsed: 50.889153ms
Oct 12 18:24:23.139: INFO: Pod "pod-subpath-test-dynamicpv-gvjk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.101040849s
Oct 12 18:24:25.190: INFO: Pod "pod-subpath-test-dynamicpv-gvjk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.151695028s
Oct 12 18:24:27.243: INFO: Pod "pod-subpath-test-dynamicpv-gvjk": Phase="Pending", Reason="", readiness=false. Elapsed: 6.205328095s
Oct 12 18:24:29.297: INFO: Pod "pod-subpath-test-dynamicpv-gvjk": Phase="Pending", Reason="", readiness=false. Elapsed: 8.258592414s
Oct 12 18:24:31.347: INFO: Pod "pod-subpath-test-dynamicpv-gvjk": Phase="Pending", Reason="", readiness=false. Elapsed: 10.309456048s
Oct 12 18:24:33.399: INFO: Pod "pod-subpath-test-dynamicpv-gvjk": Phase="Pending", Reason="", readiness=false. Elapsed: 12.360695174s
Oct 12 18:24:35.449: INFO: Pod "pod-subpath-test-dynamicpv-gvjk": Phase="Pending", Reason="", readiness=false. Elapsed: 14.411272125s
Oct 12 18:24:37.503: INFO: Pod "pod-subpath-test-dynamicpv-gvjk": Phase="Pending", Reason="", readiness=false. Elapsed: 16.465120867s
Oct 12 18:24:39.554: INFO: Pod "pod-subpath-test-dynamicpv-gvjk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.516423084s
STEP: Saw pod success
Oct 12 18:24:39.554: INFO: Pod "pod-subpath-test-dynamicpv-gvjk" satisfied condition "Succeeded or Failed"
Oct 12 18:24:39.617: INFO: Trying to get logs from node ip-172-20-47-26.us-west-1.compute.internal pod pod-subpath-test-dynamicpv-gvjk container test-container-subpath-dynamicpv-gvjk: <nil>
STEP: delete the pod
Oct 12 18:24:39.744: INFO: Waiting for pod pod-subpath-test-dynamicpv-gvjk to disappear
Oct 12 18:24:39.796: INFO: Pod pod-subpath-test-dynamicpv-gvjk no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-gvjk
Oct 12 18:24:39.796: INFO: Deleting pod "pod-subpath-test-dynamicpv-gvjk" in namespace "provisioning-3526"
... skipping 21 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":1,"skipped":12,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:25:00.628: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 116 lines ...
• [SLOW TEST:7.386 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":23,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:25:01.211: INFO: Only supported for providers [openstack] (not aws)
... skipping 150 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":2,"skipped":31,"failed":0}

S
------------------------------
[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 49 lines ...
• [SLOW TEST:31.925 seconds]
[sig-network] Conntrack
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to preserve UDP traffic when server pod cycles for a ClusterIP service
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:206
------------------------------
{"msg":"PASSED [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a ClusterIP service","total":-1,"completed":3,"skipped":43,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 18:25:01.269: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-map-b0781b80-dafb-43ea-b76b-4daebac1a5c9
STEP: Creating a pod to test consume configMaps
Oct 12 18:25:01.630: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e06d0639-c22b-41e2-b5bc-ea916725bfe6" in namespace "projected-6003" to be "Succeeded or Failed"
Oct 12 18:25:01.679: INFO: Pod "pod-projected-configmaps-e06d0639-c22b-41e2-b5bc-ea916725bfe6": Phase="Pending", Reason="", readiness=false. Elapsed: 49.335211ms
Oct 12 18:25:03.729: INFO: Pod "pod-projected-configmaps-e06d0639-c22b-41e2-b5bc-ea916725bfe6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.099364595s
STEP: Saw pod success
Oct 12 18:25:03.730: INFO: Pod "pod-projected-configmaps-e06d0639-c22b-41e2-b5bc-ea916725bfe6" satisfied condition "Succeeded or Failed"
Oct 12 18:25:03.782: INFO: Trying to get logs from node ip-172-20-47-26.us-west-1.compute.internal pod pod-projected-configmaps-e06d0639-c22b-41e2-b5bc-ea916725bfe6 container agnhost-container: <nil>
STEP: delete the pod
Oct 12 18:25:03.912: INFO: Waiting for pod pod-projected-configmaps-e06d0639-c22b-41e2-b5bc-ea916725bfe6 to disappear
Oct 12 18:25:03.965: INFO: Pod pod-projected-configmaps-e06d0639-c22b-41e2-b5bc-ea916725bfe6 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 18:25:03.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6003" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":34,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:25:04.092: INFO: Only supported for providers [vsphere] (not aws)
... skipping 127 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl logs
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1393
    should be able to retrieve and filter logs  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":-1,"completed":2,"skipped":8,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:25:04.773: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 104 lines ...
      Only supported for providers [openstack] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1092
------------------------------
SSSSSS
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":-1,"completed":1,"skipped":7,"failed":0}
[BeforeEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 18:24:37.327: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating pod pod-subpath-test-configmap-2wp6
STEP: Creating a pod to test atomic-volume-subpath
Oct 12 18:24:37.748: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-2wp6" in namespace "subpath-5800" to be "Succeeded or Failed"
Oct 12 18:24:37.798: INFO: Pod "pod-subpath-test-configmap-2wp6": Phase="Pending", Reason="", readiness=false. Elapsed: 50.394155ms
Oct 12 18:24:39.849: INFO: Pod "pod-subpath-test-configmap-2wp6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.101575457s
Oct 12 18:24:41.903: INFO: Pod "pod-subpath-test-configmap-2wp6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.155291733s
Oct 12 18:24:43.954: INFO: Pod "pod-subpath-test-configmap-2wp6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.206208469s
Oct 12 18:24:46.006: INFO: Pod "pod-subpath-test-configmap-2wp6": Phase="Running", Reason="", readiness=true. Elapsed: 8.257989869s
Oct 12 18:24:48.059: INFO: Pod "pod-subpath-test-configmap-2wp6": Phase="Running", Reason="", readiness=true. Elapsed: 10.311559207s
... skipping 3 lines ...
Oct 12 18:24:56.271: INFO: Pod "pod-subpath-test-configmap-2wp6": Phase="Running", Reason="", readiness=true. Elapsed: 18.522990636s
Oct 12 18:24:58.322: INFO: Pod "pod-subpath-test-configmap-2wp6": Phase="Running", Reason="", readiness=true. Elapsed: 20.573909979s
Oct 12 18:25:00.424: INFO: Pod "pod-subpath-test-configmap-2wp6": Phase="Running", Reason="", readiness=true. Elapsed: 22.676785761s
Oct 12 18:25:02.475: INFO: Pod "pod-subpath-test-configmap-2wp6": Phase="Running", Reason="", readiness=true. Elapsed: 24.727645093s
Oct 12 18:25:04.529: INFO: Pod "pod-subpath-test-configmap-2wp6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.781259401s
STEP: Saw pod success
Oct 12 18:25:04.529: INFO: Pod "pod-subpath-test-configmap-2wp6" satisfied condition "Succeeded or Failed"
Oct 12 18:25:04.580: INFO: Trying to get logs from node ip-172-20-59-223.us-west-1.compute.internal pod pod-subpath-test-configmap-2wp6 container test-container-subpath-configmap-2wp6: <nil>
STEP: delete the pod
Oct 12 18:25:04.698: INFO: Waiting for pod pod-subpath-test-configmap-2wp6 to disappear
Oct 12 18:25:04.750: INFO: Pod pod-subpath-test-configmap-2wp6 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-2wp6
Oct 12 18:25:04.750: INFO: Deleting pod "pod-subpath-test-configmap-2wp6" in namespace "subpath-5800"
... skipping 8 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":-1,"completed":2,"skipped":7,"failed":0}

SS
------------------------------
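The Subpath conformance case above mounts a single ConfigMap key over the path of a file that already exists in the image, so only that one path is replaced rather than a whole directory. The sketch below shows the same subPath mechanics against a fresh path instead of overwriting an image file; all names are illustrative.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: subpath-demo-config
data:
  config.txt: "mounted via subPath"
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.35
    command: ["sh", "-c", "cat /etc/demo/config.txt"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/demo/config.txt   # exactly one file appears at this path...
      subPath: config.txt               # ...taken from this key of the volume
  volumes:
  - name: cfg
    configMap:
      name: subpath-demo-config
EOF
kubectl logs pod-subpath-demo
------------------------------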
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:25:04.983: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 61 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should provision a volume and schedule a pod with AllowedTopologies
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:164
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies","total":-1,"completed":1,"skipped":6,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:25:06.749: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 73 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":2,"skipped":19,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:25:07.413: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 37 lines ...
• [SLOW TEST:7.508 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":-1,"completed":3,"skipped":32,"failed":0}

SSSSSSS
------------------------------
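The ResourceQuota spec above creates a quota object and waits for its status (hard vs. used totals) to be filled in by the quota controller. Done by hand, that looks roughly like the following; the namespace, quota name and limits are placeholders.

kubectl create namespace quota-demo
kubectl apply -n quota-demo -f - <<'EOF'
apiVersion: v1
kind: ResourceQuota
metadata:
  name: demo-quota
spec:
  hard:
    pods: "5"
    requests.cpu: "1"
    requests.memory: 1Gi
EOF
# the controller populates .status.hard and .status.used shortly after creation
kubectl describe resourcequota demo-quota -n quota-demo
kubectl delete namespace quota-demo
------------------------------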
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:25:10.965: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 159 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithformat] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":2,"skipped":37,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:25:11.130: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
... skipping 70 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:75
STEP: Creating configMap with name projected-configmap-test-volume-394cf23d-be67-44a8-94b4-1534fe7c092b
STEP: Creating a pod to test consume configMaps
Oct 12 18:25:05.237: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b5bdae31-bfe9-4492-9353-13533fadb90e" in namespace "projected-8121" to be "Succeeded or Failed"
Oct 12 18:25:05.290: INFO: Pod "pod-projected-configmaps-b5bdae31-bfe9-4492-9353-13533fadb90e": Phase="Pending", Reason="", readiness=false. Elapsed: 52.137483ms
Oct 12 18:25:07.341: INFO: Pod "pod-projected-configmaps-b5bdae31-bfe9-4492-9353-13533fadb90e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.103718231s
Oct 12 18:25:09.392: INFO: Pod "pod-projected-configmaps-b5bdae31-bfe9-4492-9353-13533fadb90e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.155013806s
Oct 12 18:25:11.447: INFO: Pod "pod-projected-configmaps-b5bdae31-bfe9-4492-9353-13533fadb90e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.20910126s
Oct 12 18:25:13.501: INFO: Pod "pod-projected-configmaps-b5bdae31-bfe9-4492-9353-13533fadb90e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.263104269s
STEP: Saw pod success
Oct 12 18:25:13.501: INFO: Pod "pod-projected-configmaps-b5bdae31-bfe9-4492-9353-13533fadb90e" satisfied condition "Succeeded or Failed"
Oct 12 18:25:13.553: INFO: Trying to get logs from node ip-172-20-47-26.us-west-1.compute.internal pod pod-projected-configmaps-b5bdae31-bfe9-4492-9353-13533fadb90e container agnhost-container: <nil>
STEP: delete the pod
Oct 12 18:25:13.668: INFO: Waiting for pod pod-projected-configmaps-b5bdae31-bfe9-4492-9353-13533fadb90e to disappear
Oct 12 18:25:13.720: INFO: Pod pod-projected-configmaps-b5bdae31-bfe9-4492-9353-13533fadb90e no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:8.949 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:75
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":3,"skipped":26,"failed":0}

S
------------------------------
[BeforeEach] [sig-network] KubeProxy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 27 lines ...
• [SLOW TEST:9.742 seconds]
[sig-network] KubeProxy
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should set TCP CLOSE_WAIT timeout [Privileged]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/kube_proxy.go:52
------------------------------
{"msg":"PASSED [sig-network] KubeProxy should set TCP CLOSE_WAIT timeout [Privileged]","total":-1,"completed":6,"skipped":42,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 66 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link-bindmounted] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":3,"skipped":18,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:25:14.203: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 124 lines ...
Oct 12 18:24:54.495: INFO: >>> kubeConfig: /root/.kube/config
Oct 12 18:24:54.958: INFO: Exec stderr: ""
Oct 12 18:24:59.111: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir "/var/lib/kubelet/mount-propagation-2902"/host; mount -t tmpfs e2e-mount-propagation-host "/var/lib/kubelet/mount-propagation-2902"/host; echo host > "/var/lib/kubelet/mount-propagation-2902"/host/file] Namespace:mount-propagation-2902 PodName:hostexec-ip-172-20-56-153.us-west-1.compute.internal-tg6t9 ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Oct 12 18:24:59.111: INFO: >>> kubeConfig: /root/.kube/config
Oct 12 18:24:59.757: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-2902 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 12 18:24:59.757: INFO: >>> kubeConfig: /root/.kube/config
Oct 12 18:25:00.199: INFO: pod slave mount master: stdout: "master", stderr: "" error: <nil>
Oct 12 18:25:00.249: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-2902 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 12 18:25:00.249: INFO: >>> kubeConfig: /root/.kube/config
Oct 12 18:25:00.735: INFO: pod slave mount slave: stdout: "slave", stderr: "" error: <nil>
Oct 12 18:25:00.788: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-2902 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 12 18:25:00.788: INFO: >>> kubeConfig: /root/.kube/config
Oct 12 18:25:01.199: INFO: pod slave mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1
Oct 12 18:25:01.248: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-2902 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 12 18:25:01.248: INFO: >>> kubeConfig: /root/.kube/config
Oct 12 18:25:01.639: INFO: pod slave mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1
Oct 12 18:25:01.688: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-2902 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 12 18:25:01.689: INFO: >>> kubeConfig: /root/.kube/config
Oct 12 18:25:02.145: INFO: pod slave mount host: stdout: "host", stderr: "" error: <nil>
Oct 12 18:25:02.195: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-2902 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 12 18:25:02.195: INFO: >>> kubeConfig: /root/.kube/config
Oct 12 18:25:02.603: INFO: pod private mount master: stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1
Oct 12 18:25:02.656: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-2902 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 12 18:25:02.656: INFO: >>> kubeConfig: /root/.kube/config
Oct 12 18:25:03.077: INFO: pod private mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1
Oct 12 18:25:03.127: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-2902 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 12 18:25:03.127: INFO: >>> kubeConfig: /root/.kube/config
Oct 12 18:25:03.610: INFO: pod private mount private: stdout: "private", stderr: "" error: <nil>
Oct 12 18:25:03.661: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-2902 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 12 18:25:03.661: INFO: >>> kubeConfig: /root/.kube/config
Oct 12 18:25:04.134: INFO: pod private mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1
Oct 12 18:25:04.184: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-2902 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 12 18:25:04.184: INFO: >>> kubeConfig: /root/.kube/config
Oct 12 18:25:04.564: INFO: pod private mount host: stdout: "", stderr: "cat: can't open '/mnt/test/host/file': No such file or directory" error: command terminated with exit code 1
Oct 12 18:25:04.613: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-2902 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 12 18:25:04.613: INFO: >>> kubeConfig: /root/.kube/config
Oct 12 18:25:05.003: INFO: pod default mount master: stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1
Oct 12 18:25:05.052: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-2902 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 12 18:25:05.052: INFO: >>> kubeConfig: /root/.kube/config
Oct 12 18:25:05.479: INFO: pod default mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1
Oct 12 18:25:05.530: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-2902 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 12 18:25:05.530: INFO: >>> kubeConfig: /root/.kube/config
Oct 12 18:25:05.923: INFO: pod default mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1
Oct 12 18:25:05.976: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-2902 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 12 18:25:05.976: INFO: >>> kubeConfig: /root/.kube/config
Oct 12 18:25:06.375: INFO: pod default mount default: stdout: "default", stderr: "" error: <nil>
Oct 12 18:25:06.425: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-2902 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 12 18:25:06.425: INFO: >>> kubeConfig: /root/.kube/config
Oct 12 18:25:06.872: INFO: pod default mount host: stdout: "", stderr: "cat: can't open '/mnt/test/host/file': No such file or directory" error: command terminated with exit code 1
Oct 12 18:25:06.922: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-2902 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 12 18:25:06.922: INFO: >>> kubeConfig: /root/.kube/config
Oct 12 18:25:07.353: INFO: pod master mount master: stdout: "master", stderr: "" error: <nil>
Oct 12 18:25:07.403: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-2902 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 12 18:25:07.403: INFO: >>> kubeConfig: /root/.kube/config
Oct 12 18:25:07.895: INFO: pod master mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1
Oct 12 18:25:07.947: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-2902 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 12 18:25:07.947: INFO: >>> kubeConfig: /root/.kube/config
Oct 12 18:25:08.489: INFO: pod master mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1
Oct 12 18:25:08.539: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-2902 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 12 18:25:08.539: INFO: >>> kubeConfig: /root/.kube/config
Oct 12 18:25:09.013: INFO: pod master mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1
Oct 12 18:25:09.062: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-2902 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 12 18:25:09.062: INFO: >>> kubeConfig: /root/.kube/config
Oct 12 18:25:09.501: INFO: pod master mount host: stdout: "host", stderr: "" error: <nil>
Oct 12 18:25:09.501: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c pidof kubelet] Namespace:mount-propagation-2902 PodName:hostexec-ip-172-20-56-153.us-west-1.compute.internal-tg6t9 ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Oct 12 18:25:09.501: INFO: >>> kubeConfig: /root/.kube/config
Oct 12 18:25:10.113: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c nsenter -t 4289 -m cat "/var/lib/kubelet/mount-propagation-2902/host/file"] Namespace:mount-propagation-2902 PodName:hostexec-ip-172-20-56-153.us-west-1.compute.internal-tg6t9 ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Oct 12 18:25:10.113: INFO: >>> kubeConfig: /root/.kube/config
Oct 12 18:25:10.664: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c nsenter -t 4289 -m cat "/var/lib/kubelet/mount-propagation-2902/master/file"] Namespace:mount-propagation-2902 PodName:hostexec-ip-172-20-56-153.us-west-1.compute.internal-tg6t9 ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Oct 12 18:25:10.664: INFO: >>> kubeConfig: /root/.kube/config
... skipping 29 lines ...
• [SLOW TEST:56.887 seconds]
[sig-node] Mount propagation
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should propagate mounts within defined scopes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/mount_propagation.go:83
------------------------------
{"msg":"PASSED [sig-node] Mount propagation should propagate mounts within defined scopes","total":-1,"completed":1,"skipped":8,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 18:25:13.913: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-b83e949a-2b84-4c63-901a-8776cb2265f8
STEP: Creating a pod to test consume secrets
Oct 12 18:25:14.269: INFO: Waiting up to 5m0s for pod "pod-secrets-700cb18e-5a47-4f46-8283-edeba97933be" in namespace "secrets-5809" to be "Succeeded or Failed"
Oct 12 18:25:14.319: INFO: Pod "pod-secrets-700cb18e-5a47-4f46-8283-edeba97933be": Phase="Pending", Reason="", readiness=false. Elapsed: 49.768942ms
Oct 12 18:25:16.369: INFO: Pod "pod-secrets-700cb18e-5a47-4f46-8283-edeba97933be": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.099412436s
STEP: Saw pod success
Oct 12 18:25:16.369: INFO: Pod "pod-secrets-700cb18e-5a47-4f46-8283-edeba97933be" satisfied condition "Succeeded or Failed"
Oct 12 18:25:16.419: INFO: Trying to get logs from node ip-172-20-37-53.us-west-1.compute.internal pod pod-secrets-700cb18e-5a47-4f46-8283-edeba97933be container secret-volume-test: <nil>
STEP: delete the pod
Oct 12 18:25:16.543: INFO: Waiting for pod pod-secrets-700cb18e-5a47-4f46-8283-edeba97933be to disappear
Oct 12 18:25:16.595: INFO: Pod pod-secrets-700cb18e-5a47-4f46-8283-edeba97933be no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 13 lines ...
[It] should support readOnly file specified in the volumeMount [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:380
Oct 12 18:25:11.458: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Oct 12 18:25:11.509: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-g6j9
STEP: Creating a pod to test subpath
Oct 12 18:25:11.561: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-g6j9" in namespace "provisioning-197" to be "Succeeded or Failed"
Oct 12 18:25:11.612: INFO: Pod "pod-subpath-test-inlinevolume-g6j9": Phase="Pending", Reason="", readiness=false. Elapsed: 49.952525ms
Oct 12 18:25:13.664: INFO: Pod "pod-subpath-test-inlinevolume-g6j9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.102311024s
Oct 12 18:25:15.714: INFO: Pod "pod-subpath-test-inlinevolume-g6j9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.152763846s
Oct 12 18:25:17.765: INFO: Pod "pod-subpath-test-inlinevolume-g6j9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.203766385s
STEP: Saw pod success
Oct 12 18:25:17.765: INFO: Pod "pod-subpath-test-inlinevolume-g6j9" satisfied condition "Succeeded or Failed"
Oct 12 18:25:17.816: INFO: Trying to get logs from node ip-172-20-59-223.us-west-1.compute.internal pod pod-subpath-test-inlinevolume-g6j9 container test-container-subpath-inlinevolume-g6j9: <nil>
STEP: delete the pod
Oct 12 18:25:17.944: INFO: Waiting for pod pod-subpath-test-inlinevolume-g6j9 to disappear
Oct 12 18:25:17.994: INFO: Pod pod-subpath-test-inlinevolume-g6j9 no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-g6j9
Oct 12 18:25:17.994: INFO: Deleting pod "pod-subpath-test-inlinevolume-g6j9" in namespace "provisioning-197"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:380
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":3,"skipped":51,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 21 lines ...
Oct 12 18:24:49.252: INFO: PersistentVolumeClaim pvc-mz72j found but phase is Pending instead of Bound.
Oct 12 18:24:51.302: INFO: PersistentVolumeClaim pvc-mz72j found and phase=Bound (12.351357036s)
Oct 12 18:24:51.302: INFO: Waiting up to 3m0s for PersistentVolume local-f2k6g to have phase Bound
Oct 12 18:24:51.352: INFO: PersistentVolume local-f2k6g found and phase=Bound (49.513152ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-47d6
STEP: Creating a pod to test atomic-volume-subpath
Oct 12 18:24:51.506: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-47d6" in namespace "provisioning-6951" to be "Succeeded or Failed"
Oct 12 18:24:51.558: INFO: Pod "pod-subpath-test-preprovisionedpv-47d6": Phase="Pending", Reason="", readiness=false. Elapsed: 52.291128ms
Oct 12 18:24:53.615: INFO: Pod "pod-subpath-test-preprovisionedpv-47d6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.109611638s
Oct 12 18:24:55.666: INFO: Pod "pod-subpath-test-preprovisionedpv-47d6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.160609226s
Oct 12 18:24:57.718: INFO: Pod "pod-subpath-test-preprovisionedpv-47d6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.212049927s
Oct 12 18:24:59.768: INFO: Pod "pod-subpath-test-preprovisionedpv-47d6": Phase="Running", Reason="", readiness=true. Elapsed: 8.261873718s
Oct 12 18:25:01.819: INFO: Pod "pod-subpath-test-preprovisionedpv-47d6": Phase="Running", Reason="", readiness=true. Elapsed: 10.313506679s
... skipping 3 lines ...
Oct 12 18:25:10.038: INFO: Pod "pod-subpath-test-preprovisionedpv-47d6": Phase="Running", Reason="", readiness=true. Elapsed: 18.531948064s
Oct 12 18:25:12.088: INFO: Pod "pod-subpath-test-preprovisionedpv-47d6": Phase="Running", Reason="", readiness=true. Elapsed: 20.581929339s
Oct 12 18:25:14.142: INFO: Pod "pod-subpath-test-preprovisionedpv-47d6": Phase="Running", Reason="", readiness=true. Elapsed: 22.635820817s
Oct 12 18:25:16.193: INFO: Pod "pod-subpath-test-preprovisionedpv-47d6": Phase="Running", Reason="", readiness=true. Elapsed: 24.68718834s
Oct 12 18:25:18.243: INFO: Pod "pod-subpath-test-preprovisionedpv-47d6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.737128922s
STEP: Saw pod success
Oct 12 18:25:18.243: INFO: Pod "pod-subpath-test-preprovisionedpv-47d6" satisfied condition "Succeeded or Failed"
Oct 12 18:25:18.295: INFO: Trying to get logs from node ip-172-20-59-223.us-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-47d6 container test-container-subpath-preprovisionedpv-47d6: <nil>
STEP: delete the pod
Oct 12 18:25:18.401: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-47d6 to disappear
Oct 12 18:25:18.458: INFO: Pod pod-subpath-test-preprovisionedpv-47d6 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-47d6
Oct 12 18:25:18.458: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-47d6" in namespace "provisioning-6951"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":2,"skipped":24,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:25:19.909: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 48 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 18:25:20.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4353" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl apply should reuse port when apply to an existing SVC","total":-1,"completed":4,"skipped":53,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:25:20.141: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 14 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SSSSSS
------------------------------
{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":18,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 18:25:02.827: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 51 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link-bindmounted] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":4,"skipped":18,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:25:20.488: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
... skipping 92 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 18:25:21.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-1708" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces should list and delete a collection of PodDisruptionBudgets [Conformance]","total":-1,"completed":3,"skipped":30,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 66 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume one after the other
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: block] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":3,"skipped":16,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:25:22.555: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 235 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:445
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":1,"skipped":0,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:25:23.656: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 9 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:445

      Driver local doesn't support InlineVolume -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":46,"failed":0}
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 18:25:16.724: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 53 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl label
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1316
    should update the label on a resource  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":-1,"completed":8,"skipped":46,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
... skipping 136 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should resize volume when PVC is edited while pod is using it
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:246
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it","total":-1,"completed":3,"skipped":12,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:25:24.366: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 60 lines ...
Oct 12 18:25:02.606: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5555.svc.cluster.local from pod dns-5555/dns-test-ea5ee5c7-3801-4783-95b9-c6debec026bc: the server could not find the requested resource (get pods dns-test-ea5ee5c7-3801-4783-95b9-c6debec026bc)
Oct 12 18:25:02.656: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5555.svc.cluster.local from pod dns-5555/dns-test-ea5ee5c7-3801-4783-95b9-c6debec026bc: the server could not find the requested resource (get pods dns-test-ea5ee5c7-3801-4783-95b9-c6debec026bc)
Oct 12 18:25:03.017: INFO: Unable to read jessie_udp@dns-test-service.dns-5555.svc.cluster.local from pod dns-5555/dns-test-ea5ee5c7-3801-4783-95b9-c6debec026bc: the server could not find the requested resource (get pods dns-test-ea5ee5c7-3801-4783-95b9-c6debec026bc)
Oct 12 18:25:03.067: INFO: Unable to read jessie_tcp@dns-test-service.dns-5555.svc.cluster.local from pod dns-5555/dns-test-ea5ee5c7-3801-4783-95b9-c6debec026bc: the server could not find the requested resource (get pods dns-test-ea5ee5c7-3801-4783-95b9-c6debec026bc)
Oct 12 18:25:03.118: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5555.svc.cluster.local from pod dns-5555/dns-test-ea5ee5c7-3801-4783-95b9-c6debec026bc: the server could not find the requested resource (get pods dns-test-ea5ee5c7-3801-4783-95b9-c6debec026bc)
Oct 12 18:25:03.168: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5555.svc.cluster.local from pod dns-5555/dns-test-ea5ee5c7-3801-4783-95b9-c6debec026bc: the server could not find the requested resource (get pods dns-test-ea5ee5c7-3801-4783-95b9-c6debec026bc)
Oct 12 18:25:03.481: INFO: Lookups using dns-5555/dns-test-ea5ee5c7-3801-4783-95b9-c6debec026bc failed for: [wheezy_udp@dns-test-service.dns-5555.svc.cluster.local wheezy_tcp@dns-test-service.dns-5555.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5555.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5555.svc.cluster.local jessie_udp@dns-test-service.dns-5555.svc.cluster.local jessie_tcp@dns-test-service.dns-5555.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5555.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5555.svc.cluster.local]

Oct 12 18:25:08.532: INFO: Unable to read wheezy_udp@dns-test-service.dns-5555.svc.cluster.local from pod dns-5555/dns-test-ea5ee5c7-3801-4783-95b9-c6debec026bc: the server could not find the requested resource (get pods dns-test-ea5ee5c7-3801-4783-95b9-c6debec026bc)
Oct 12 18:25:08.585: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5555.svc.cluster.local from pod dns-5555/dns-test-ea5ee5c7-3801-4783-95b9-c6debec026bc: the server could not find the requested resource (get pods dns-test-ea5ee5c7-3801-4783-95b9-c6debec026bc)
Oct 12 18:25:08.635: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5555.svc.cluster.local from pod dns-5555/dns-test-ea5ee5c7-3801-4783-95b9-c6debec026bc: the server could not find the requested resource (get pods dns-test-ea5ee5c7-3801-4783-95b9-c6debec026bc)
Oct 12 18:25:08.685: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5555.svc.cluster.local from pod dns-5555/dns-test-ea5ee5c7-3801-4783-95b9-c6debec026bc: the server could not find the requested resource (get pods dns-test-ea5ee5c7-3801-4783-95b9-c6debec026bc)
Oct 12 18:25:09.054: INFO: Unable to read jessie_udp@dns-test-service.dns-5555.svc.cluster.local from pod dns-5555/dns-test-ea5ee5c7-3801-4783-95b9-c6debec026bc: the server could not find the requested resource (get pods dns-test-ea5ee5c7-3801-4783-95b9-c6debec026bc)
Oct 12 18:25:09.104: INFO: Unable to read jessie_tcp@dns-test-service.dns-5555.svc.cluster.local from pod dns-5555/dns-test-ea5ee5c7-3801-4783-95b9-c6debec026bc: the server could not find the requested resource (get pods dns-test-ea5ee5c7-3801-4783-95b9-c6debec026bc)
Oct 12 18:25:09.154: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5555.svc.cluster.local from pod dns-5555/dns-test-ea5ee5c7-3801-4783-95b9-c6debec026bc: the server could not find the requested resource (get pods dns-test-ea5ee5c7-3801-4783-95b9-c6debec026bc)
Oct 12 18:25:09.205: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5555.svc.cluster.local from pod dns-5555/dns-test-ea5ee5c7-3801-4783-95b9-c6debec026bc: the server could not find the requested resource (get pods dns-test-ea5ee5c7-3801-4783-95b9-c6debec026bc)
Oct 12 18:25:09.518: INFO: Lookups using dns-5555/dns-test-ea5ee5c7-3801-4783-95b9-c6debec026bc failed for: [wheezy_udp@dns-test-service.dns-5555.svc.cluster.local wheezy_tcp@dns-test-service.dns-5555.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5555.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5555.svc.cluster.local jessie_udp@dns-test-service.dns-5555.svc.cluster.local jessie_tcp@dns-test-service.dns-5555.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5555.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5555.svc.cluster.local]

Oct 12 18:25:13.532: INFO: Unable to read wheezy_udp@dns-test-service.dns-5555.svc.cluster.local from pod dns-5555/dns-test-ea5ee5c7-3801-4783-95b9-c6debec026bc: the server could not find the requested resource (get pods dns-test-ea5ee5c7-3801-4783-95b9-c6debec026bc)
Oct 12 18:25:13.582: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5555.svc.cluster.local from pod dns-5555/dns-test-ea5ee5c7-3801-4783-95b9-c6debec026bc: the server could not find the requested resource (get pods dns-test-ea5ee5c7-3801-4783-95b9-c6debec026bc)
Oct 12 18:25:13.632: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5555.svc.cluster.local from pod dns-5555/dns-test-ea5ee5c7-3801-4783-95b9-c6debec026bc: the server could not find the requested resource (get pods dns-test-ea5ee5c7-3801-4783-95b9-c6debec026bc)
Oct 12 18:25:13.682: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5555.svc.cluster.local from pod dns-5555/dns-test-ea5ee5c7-3801-4783-95b9-c6debec026bc: the server could not find the requested resource (get pods dns-test-ea5ee5c7-3801-4783-95b9-c6debec026bc)
Oct 12 18:25:14.040: INFO: Unable to read jessie_udp@dns-test-service.dns-5555.svc.cluster.local from pod dns-5555/dns-test-ea5ee5c7-3801-4783-95b9-c6debec026bc: the server could not find the requested resource (get pods dns-test-ea5ee5c7-3801-4783-95b9-c6debec026bc)
Oct 12 18:25:14.090: INFO: Unable to read jessie_tcp@dns-test-service.dns-5555.svc.cluster.local from pod dns-5555/dns-test-ea5ee5c7-3801-4783-95b9-c6debec026bc: the server could not find the requested resource (get pods dns-test-ea5ee5c7-3801-4783-95b9-c6debec026bc)
Oct 12 18:25:14.145: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5555.svc.cluster.local from pod dns-5555/dns-test-ea5ee5c7-3801-4783-95b9-c6debec026bc: the server could not find the requested resource (get pods dns-test-ea5ee5c7-3801-4783-95b9-c6debec026bc)
Oct 12 18:25:14.195: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5555.svc.cluster.local from pod dns-5555/dns-test-ea5ee5c7-3801-4783-95b9-c6debec026bc: the server could not find the requested resource (get pods dns-test-ea5ee5c7-3801-4783-95b9-c6debec026bc)
Oct 12 18:25:14.503: INFO: Lookups using dns-5555/dns-test-ea5ee5c7-3801-4783-95b9-c6debec026bc failed for: [wheezy_udp@dns-test-service.dns-5555.svc.cluster.local wheezy_tcp@dns-test-service.dns-5555.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5555.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5555.svc.cluster.local jessie_udp@dns-test-service.dns-5555.svc.cluster.local jessie_tcp@dns-test-service.dns-5555.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5555.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5555.svc.cluster.local]

Oct 12 18:25:18.533: INFO: Unable to read wheezy_udp@dns-test-service.dns-5555.svc.cluster.local from pod dns-5555/dns-test-ea5ee5c7-3801-4783-95b9-c6debec026bc: the server could not find the requested resource (get pods dns-test-ea5ee5c7-3801-4783-95b9-c6debec026bc)
Oct 12 18:25:18.583: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5555.svc.cluster.local from pod dns-5555/dns-test-ea5ee5c7-3801-4783-95b9-c6debec026bc: the server could not find the requested resource (get pods dns-test-ea5ee5c7-3801-4783-95b9-c6debec026bc)
Oct 12 18:25:18.633: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5555.svc.cluster.local from pod dns-5555/dns-test-ea5ee5c7-3801-4783-95b9-c6debec026bc: the server could not find the requested resource (get pods dns-test-ea5ee5c7-3801-4783-95b9-c6debec026bc)
Oct 12 18:25:18.685: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5555.svc.cluster.local from pod dns-5555/dns-test-ea5ee5c7-3801-4783-95b9-c6debec026bc: the server could not find the requested resource (get pods dns-test-ea5ee5c7-3801-4783-95b9-c6debec026bc)
Oct 12 18:25:19.114: INFO: Unable to read jessie_udp@dns-test-service.dns-5555.svc.cluster.local from pod dns-5555/dns-test-ea5ee5c7-3801-4783-95b9-c6debec026bc: the server could not find the requested resource (get pods dns-test-ea5ee5c7-3801-4783-95b9-c6debec026bc)
Oct 12 18:25:19.165: INFO: Unable to read jessie_tcp@dns-test-service.dns-5555.svc.cluster.local from pod dns-5555/dns-test-ea5ee5c7-3801-4783-95b9-c6debec026bc: the server could not find the requested resource (get pods dns-test-ea5ee5c7-3801-4783-95b9-c6debec026bc)
Oct 12 18:25:19.221: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5555.svc.cluster.local from pod dns-5555/dns-test-ea5ee5c7-3801-4783-95b9-c6debec026bc: the server could not find the requested resource (get pods dns-test-ea5ee5c7-3801-4783-95b9-c6debec026bc)
Oct 12 18:25:19.274: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5555.svc.cluster.local from pod dns-5555/dns-test-ea5ee5c7-3801-4783-95b9-c6debec026bc: the server could not find the requested resource (get pods dns-test-ea5ee5c7-3801-4783-95b9-c6debec026bc)
Oct 12 18:25:19.584: INFO: Lookups using dns-5555/dns-test-ea5ee5c7-3801-4783-95b9-c6debec026bc failed for: [wheezy_udp@dns-test-service.dns-5555.svc.cluster.local wheezy_tcp@dns-test-service.dns-5555.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5555.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5555.svc.cluster.local jessie_udp@dns-test-service.dns-5555.svc.cluster.local jessie_tcp@dns-test-service.dns-5555.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5555.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5555.svc.cluster.local]

Oct 12 18:25:24.613: INFO: DNS probes using dns-5555/dns-test-ea5ee5c7-3801-4783-95b9-c6debec026bc succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
... skipping 15 lines ...
Oct 12 18:25:13.847: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward api env vars
Oct 12 18:25:14.156: INFO: Waiting up to 5m0s for pod "downward-api-709046ae-6c6b-408b-b3d4-315028219747" in namespace "downward-api-5800" to be "Succeeded or Failed"
Oct 12 18:25:14.213: INFO: Pod "downward-api-709046ae-6c6b-408b-b3d4-315028219747": Phase="Pending", Reason="", readiness=false. Elapsed: 56.546607ms
Oct 12 18:25:16.266: INFO: Pod "downward-api-709046ae-6c6b-408b-b3d4-315028219747": Phase="Pending", Reason="", readiness=false. Elapsed: 2.110230978s
Oct 12 18:25:18.317: INFO: Pod "downward-api-709046ae-6c6b-408b-b3d4-315028219747": Phase="Pending", Reason="", readiness=false. Elapsed: 4.161315224s
Oct 12 18:25:20.370: INFO: Pod "downward-api-709046ae-6c6b-408b-b3d4-315028219747": Phase="Pending", Reason="", readiness=false. Elapsed: 6.214088856s
Oct 12 18:25:22.423: INFO: Pod "downward-api-709046ae-6c6b-408b-b3d4-315028219747": Phase="Pending", Reason="", readiness=false. Elapsed: 8.266887206s
Oct 12 18:25:24.477: INFO: Pod "downward-api-709046ae-6c6b-408b-b3d4-315028219747": Phase="Pending", Reason="", readiness=false. Elapsed: 10.320706405s
Oct 12 18:25:26.528: INFO: Pod "downward-api-709046ae-6c6b-408b-b3d4-315028219747": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.372049128s
STEP: Saw pod success
Oct 12 18:25:26.528: INFO: Pod "downward-api-709046ae-6c6b-408b-b3d4-315028219747" satisfied condition "Succeeded or Failed"
Oct 12 18:25:26.579: INFO: Trying to get logs from node ip-172-20-47-26.us-west-1.compute.internal pod downward-api-709046ae-6c6b-408b-b3d4-315028219747 container dapi-container: <nil>
STEP: delete the pod
Oct 12 18:25:26.737: INFO: Waiting for pod downward-api-709046ae-6c6b-408b-b3d4-315028219747 to disappear
Oct 12 18:25:26.787: INFO: Pod downward-api-709046ae-6c6b-408b-b3d4-315028219747 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:13.062 seconds]
[sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":27,"failed":0}

SS
------------------------------
[BeforeEach] [sig-node] NodeLease
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 8 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 18:25:27.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "node-lease-test-2740" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] NodeLease when the NodeLease feature is enabled should have OwnerReferences set","total":-1,"completed":5,"skipped":29,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:25:27.465: INFO: Only supported for providers [gce gke] (not aws)
... skipping 37 lines ...
• [SLOW TEST:16.585 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks succeed
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:51
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks succeed","total":-1,"completed":4,"skipped":47,"failed":0}

S
------------------------------
[BeforeEach] [sig-network] Firewall rule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 133 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI Volume expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:562
    should expand volume without restarting pod if nodeExpansion=off
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:591
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should expand volume without restarting pod if nodeExpansion=off","total":-1,"completed":1,"skipped":1,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:25:28.888: INFO: Only supported for providers [azure] (not aws)
... skipping 62 lines ...
• [SLOW TEST:6.278 seconds]
[sig-api-machinery] ServerSideApply
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should work for CRDs
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/apply.go:569
------------------------------
{"msg":"PASSED [sig-api-machinery] ServerSideApply should work for CRDs","total":-1,"completed":4,"skipped":13,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:25:29.231: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 63 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 18:25:30.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ingressclass-2280" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] IngressClass API  should support creating IngressClass API operations [Conformance]","total":-1,"completed":5,"skipped":17,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 19 lines ...
• [SLOW TEST:72.361 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be restarted with a failing exec liveness probe that took longer than the timeout
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:258
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted with a failing exec liveness probe that took longer than the timeout","total":-1,"completed":2,"skipped":7,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:25:31.679: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 28 lines ...
[It] should support non-existent path
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
Oct 12 18:25:23.918: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Oct 12 18:25:23.969: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-pwkk
STEP: Creating a pod to test subpath
Oct 12 18:25:24.032: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-pwkk" in namespace "provisioning-2683" to be "Succeeded or Failed"
Oct 12 18:25:24.083: INFO: Pod "pod-subpath-test-inlinevolume-pwkk": Phase="Pending", Reason="", readiness=false. Elapsed: 50.221121ms
Oct 12 18:25:26.133: INFO: Pod "pod-subpath-test-inlinevolume-pwkk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100704441s
Oct 12 18:25:28.186: INFO: Pod "pod-subpath-test-inlinevolume-pwkk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.153624698s
Oct 12 18:25:30.237: INFO: Pod "pod-subpath-test-inlinevolume-pwkk": Phase="Pending", Reason="", readiness=false. Elapsed: 6.204372003s
Oct 12 18:25:32.287: INFO: Pod "pod-subpath-test-inlinevolume-pwkk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.254398032s
STEP: Saw pod success
Oct 12 18:25:32.287: INFO: Pod "pod-subpath-test-inlinevolume-pwkk" satisfied condition "Succeeded or Failed"
Oct 12 18:25:32.337: INFO: Trying to get logs from node ip-172-20-59-223.us-west-1.compute.internal pod pod-subpath-test-inlinevolume-pwkk container test-container-volume-inlinevolume-pwkk: <nil>
STEP: delete the pod
Oct 12 18:25:32.455: INFO: Waiting for pod pod-subpath-test-inlinevolume-pwkk to disappear
Oct 12 18:25:32.504: INFO: Pod pod-subpath-test-inlinevolume-pwkk no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-pwkk
Oct 12 18:25:32.504: INFO: Deleting pod "pod-subpath-test-inlinevolume-pwkk" in namespace "provisioning-2683"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path","total":-1,"completed":2,"skipped":1,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:25:32.731: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 32 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159

      Driver hostPath doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services  [Conformance]","total":-1,"completed":4,"skipped":11,"failed":0}
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 18:25:25.160: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward api env vars
Oct 12 18:25:25.492: INFO: Waiting up to 5m0s for pod "downward-api-15bda02a-a11b-4be0-aa74-71ba3de3cc1e" in namespace "downward-api-5391" to be "Succeeded or Failed"
Oct 12 18:25:25.541: INFO: Pod "downward-api-15bda02a-a11b-4be0-aa74-71ba3de3cc1e": Phase="Pending", Reason="", readiness=false. Elapsed: 49.521634ms
Oct 12 18:25:27.594: INFO: Pod "downward-api-15bda02a-a11b-4be0-aa74-71ba3de3cc1e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.101723764s
Oct 12 18:25:29.663: INFO: Pod "downward-api-15bda02a-a11b-4be0-aa74-71ba3de3cc1e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.170907696s
Oct 12 18:25:31.715: INFO: Pod "downward-api-15bda02a-a11b-4be0-aa74-71ba3de3cc1e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.2226866s
Oct 12 18:25:33.764: INFO: Pod "downward-api-15bda02a-a11b-4be0-aa74-71ba3de3cc1e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.272392919s
Oct 12 18:25:35.815: INFO: Pod "downward-api-15bda02a-a11b-4be0-aa74-71ba3de3cc1e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.323079673s
STEP: Saw pod success
Oct 12 18:25:35.815: INFO: Pod "downward-api-15bda02a-a11b-4be0-aa74-71ba3de3cc1e" satisfied condition "Succeeded or Failed"
Oct 12 18:25:35.865: INFO: Trying to get logs from node ip-172-20-47-26.us-west-1.compute.internal pod downward-api-15bda02a-a11b-4be0-aa74-71ba3de3cc1e container dapi-container: <nil>
STEP: delete the pod
Oct 12 18:25:35.981: INFO: Waiting for pod downward-api-15bda02a-a11b-4be0-aa74-71ba3de3cc1e to disappear
Oct 12 18:25:36.030: INFO: Pod downward-api-15bda02a-a11b-4be0-aa74-71ba3de3cc1e no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:10.971 seconds]
[sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":11,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:25:36.152: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 240 lines ...
• [SLOW TEST:9.384 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a mutating webhook should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":-1,"completed":6,"skipped":40,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 11 lines ...
STEP: Destroying namespace "services-7202" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753

•
------------------------------
{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":-1,"completed":4,"skipped":24,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:25:36.990: INFO: Only supported for providers [vsphere] (not aws)
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: vsphere]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [vsphere] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1438
------------------------------
... skipping 63 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":4,"skipped":5,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:25:37.570: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 218 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI FSGroupPolicy [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1559
    should modify fsGroup if fsGroupPolicy=File
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1583
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=File","total":-1,"completed":2,"skipped":17,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:25:38.056: INFO: Only supported for providers [gce gke] (not aws)
... skipping 221 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    when running a container with a new image
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266
      should be able to pull image [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:382
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull image [NodeConformance]","total":-1,"completed":3,"skipped":14,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:25:38.601: INFO: Only supported for providers [openstack] (not aws)
... skipping 23 lines ...
Oct 12 18:25:37.004: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0777 on tmpfs
Oct 12 18:25:37.330: INFO: Waiting up to 5m0s for pod "pod-50381366-333d-421b-9a22-0888b8555e29" in namespace "emptydir-8252" to be "Succeeded or Failed"
Oct 12 18:25:37.380: INFO: Pod "pod-50381366-333d-421b-9a22-0888b8555e29": Phase="Pending", Reason="", readiness=false. Elapsed: 50.792926ms
Oct 12 18:25:39.432: INFO: Pod "pod-50381366-333d-421b-9a22-0888b8555e29": Phase="Pending", Reason="", readiness=false. Elapsed: 2.10195206s
Oct 12 18:25:41.485: INFO: Pod "pod-50381366-333d-421b-9a22-0888b8555e29": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.155223398s
STEP: Saw pod success
Oct 12 18:25:41.485: INFO: Pod "pod-50381366-333d-421b-9a22-0888b8555e29" satisfied condition "Succeeded or Failed"
Oct 12 18:25:41.546: INFO: Trying to get logs from node ip-172-20-56-153.us-west-1.compute.internal pod pod-50381366-333d-421b-9a22-0888b8555e29 container test-container: <nil>
STEP: delete the pod
Oct 12 18:25:41.684: INFO: Waiting for pod pod-50381366-333d-421b-9a22-0888b8555e29 to disappear
Oct 12 18:25:41.742: INFO: Pod pod-50381366-333d-421b-9a22-0888b8555e29 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 18:25:41.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8252" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":29,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 54 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":2,"skipped":16,"failed":0}

S
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 58 lines ...
• [SLOW TEST:26.700 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should have session affinity work for NodePort service [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":2,"skipped":10,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":-1,"completed":7,"skipped":44,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 18:25:37.627: INFO: >>> kubeConfig: /root/.kube/config
... skipping 2 lines ...
[It] should allow exec of files on the volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
Oct 12 18:25:37.887: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Oct 12 18:25:37.887: INFO: Creating resource for inline volume
STEP: Creating pod exec-volume-test-inlinevolume-lskg
STEP: Creating a pod to test exec-volume-test
Oct 12 18:25:37.942: INFO: Waiting up to 5m0s for pod "exec-volume-test-inlinevolume-lskg" in namespace "volume-5124" to be "Succeeded or Failed"
Oct 12 18:25:37.992: INFO: Pod "exec-volume-test-inlinevolume-lskg": Phase="Pending", Reason="", readiness=false. Elapsed: 50.417585ms
Oct 12 18:25:40.043: INFO: Pod "exec-volume-test-inlinevolume-lskg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.101344049s
Oct 12 18:25:42.095: INFO: Pod "exec-volume-test-inlinevolume-lskg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.153287198s
Oct 12 18:25:44.148: INFO: Pod "exec-volume-test-inlinevolume-lskg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.205954655s
STEP: Saw pod success
Oct 12 18:25:44.148: INFO: Pod "exec-volume-test-inlinevolume-lskg" satisfied condition "Succeeded or Failed"
Oct 12 18:25:44.200: INFO: Trying to get logs from node ip-172-20-47-26.us-west-1.compute.internal pod exec-volume-test-inlinevolume-lskg container exec-container-inlinevolume-lskg: <nil>
STEP: delete the pod
Oct 12 18:25:44.322: INFO: Waiting for pod exec-volume-test-inlinevolume-lskg to disappear
Oct 12 18:25:44.373: INFO: Pod exec-volume-test-inlinevolume-lskg no longer exists
STEP: Deleting pod exec-volume-test-inlinevolume-lskg
Oct 12 18:25:44.373: INFO: Deleting pod "exec-volume-test-inlinevolume-lskg" in namespace "volume-5124"
... skipping 10 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":8,"skipped":44,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:25:44.564: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 35 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 18:25:46.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9401" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":17,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:25:46.869: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 120 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":5,"skipped":26,"failed":0}
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 18:25:47.489: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 9 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 18:25:48.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-568" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":-1,"completed":6,"skipped":26,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-api-machinery] Discovery
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 155 lines ...
Oct 12 18:25:17.877: INFO: PersistentVolumeClaim csi-hostpath5ln2q found but phase is Pending instead of Bound.
Oct 12 18:25:19.927: INFO: PersistentVolumeClaim csi-hostpath5ln2q found but phase is Pending instead of Bound.
Oct 12 18:25:21.980: INFO: PersistentVolumeClaim csi-hostpath5ln2q found but phase is Pending instead of Bound.
Oct 12 18:25:24.032: INFO: PersistentVolumeClaim csi-hostpath5ln2q found and phase=Bound (14.412959529s)
STEP: Creating pod pod-subpath-test-dynamicpv-cb8z
STEP: Creating a pod to test subpath
Oct 12 18:25:24.186: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-cb8z" in namespace "provisioning-1057" to be "Succeeded or Failed"
Oct 12 18:25:24.237: INFO: Pod "pod-subpath-test-dynamicpv-cb8z": Phase="Pending", Reason="", readiness=false. Elapsed: 50.480706ms
Oct 12 18:25:26.291: INFO: Pod "pod-subpath-test-dynamicpv-cb8z": Phase="Pending", Reason="", readiness=false. Elapsed: 2.104422412s
Oct 12 18:25:28.344: INFO: Pod "pod-subpath-test-dynamicpv-cb8z": Phase="Pending", Reason="", readiness=false. Elapsed: 4.157793598s
Oct 12 18:25:30.395: INFO: Pod "pod-subpath-test-dynamicpv-cb8z": Phase="Pending", Reason="", readiness=false. Elapsed: 6.20875139s
Oct 12 18:25:32.448: INFO: Pod "pod-subpath-test-dynamicpv-cb8z": Phase="Pending", Reason="", readiness=false. Elapsed: 8.262007358s
Oct 12 18:25:34.499: INFO: Pod "pod-subpath-test-dynamicpv-cb8z": Phase="Pending", Reason="", readiness=false. Elapsed: 10.312664305s
Oct 12 18:25:36.548: INFO: Pod "pod-subpath-test-dynamicpv-cb8z": Phase="Pending", Reason="", readiness=false. Elapsed: 12.362307021s
Oct 12 18:25:38.599: INFO: Pod "pod-subpath-test-dynamicpv-cb8z": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.412587642s
STEP: Saw pod success
Oct 12 18:25:38.599: INFO: Pod "pod-subpath-test-dynamicpv-cb8z" satisfied condition "Succeeded or Failed"
Oct 12 18:25:38.651: INFO: Trying to get logs from node ip-172-20-59-223.us-west-1.compute.internal pod pod-subpath-test-dynamicpv-cb8z container test-container-volume-dynamicpv-cb8z: <nil>
STEP: delete the pod
Oct 12 18:25:38.767: INFO: Waiting for pod pod-subpath-test-dynamicpv-cb8z to disappear
Oct 12 18:25:38.820: INFO: Pod pod-subpath-test-dynamicpv-cb8z no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-cb8z
Oct 12 18:25:38.820: INFO: Deleting pod "pod-subpath-test-dynamicpv-cb8z" in namespace "provisioning-1057"
... skipping 60 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path","total":-1,"completed":2,"skipped":9,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:25:53.434: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 196 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (ext4)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data","total":-1,"completed":3,"skipped":10,"failed":0}

SSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 37 lines ...
• [SLOW TEST:13.654 seconds]
[sig-apps] DisruptionController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should block an eviction until the PDB is updated to allow it [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should block an eviction until the PDB is updated to allow it [Conformance]","total":-1,"completed":9,"skipped":47,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:25:58.241: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 110 lines ...
Oct 12 18:25:15.888: INFO: PersistentVolumeClaim pvc-v7trv found but phase is Pending instead of Bound.
Oct 12 18:25:17.939: INFO: PersistentVolumeClaim pvc-v7trv found and phase=Bound (2.104071116s)
STEP: Deleting the previously created pod
Oct 12 18:25:36.200: INFO: Deleting pod "pvc-volume-tester-95sd4" in namespace "csi-mock-volumes-5813"
Oct 12 18:25:36.259: INFO: Wait up to 5m0s for pod "pvc-volume-tester-95sd4" to be fully deleted
STEP: Checking CSI driver logs
Oct 12 18:25:40.419: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/7762d4b9-bef1-4ca1-aa72-56eb73da9d54/volumes/kubernetes.io~csi/pvc-abaf1b7a-16e6-49f6-96db-3eaf315b0f66/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-95sd4
Oct 12 18:25:40.419: INFO: Deleting pod "pvc-volume-tester-95sd4" in namespace "csi-mock-volumes-5813"
STEP: Deleting claim pvc-v7trv
Oct 12 18:25:40.569: INFO: Waiting up to 2m0s for PersistentVolume pvc-abaf1b7a-16e6-49f6-96db-3eaf315b0f66 to get deleted
Oct 12 18:25:40.619: INFO: PersistentVolume pvc-abaf1b7a-16e6-49f6-96db-3eaf315b0f66 found and phase=Released (49.942825ms)
Oct 12 18:25:42.670: INFO: PersistentVolume pvc-abaf1b7a-16e6-49f6-96db-3eaf315b0f66 found and phase=Released (2.100493807s)
... skipping 45 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI workload information using mock driver
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:444
    should not be passed when CSIDriver does not exist
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:494
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when CSIDriver does not exist","total":-1,"completed":4,"skipped":44,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
... skipping 18 lines ...
Oct 12 18:25:47.941: INFO: PersistentVolumeClaim pvc-h6t5k found but phase is Pending instead of Bound.
Oct 12 18:25:49.992: INFO: PersistentVolumeClaim pvc-h6t5k found and phase=Bound (8.250727216s)
Oct 12 18:25:49.992: INFO: Waiting up to 3m0s for PersistentVolume local-kjxdk to have phase Bound
Oct 12 18:25:50.041: INFO: PersistentVolume local-kjxdk found and phase=Bound (49.605622ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-6rdr
STEP: Creating a pod to test exec-volume-test
Oct 12 18:25:50.192: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-6rdr" in namespace "volume-7250" to be "Succeeded or Failed"
Oct 12 18:25:50.247: INFO: Pod "exec-volume-test-preprovisionedpv-6rdr": Phase="Pending", Reason="", readiness=false. Elapsed: 55.513645ms
Oct 12 18:25:52.307: INFO: Pod "exec-volume-test-preprovisionedpv-6rdr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.114760266s
Oct 12 18:25:54.357: INFO: Pod "exec-volume-test-preprovisionedpv-6rdr": Phase="Pending", Reason="", readiness=false. Elapsed: 4.165389923s
Oct 12 18:25:56.408: INFO: Pod "exec-volume-test-preprovisionedpv-6rdr": Phase="Pending", Reason="", readiness=false. Elapsed: 6.21593477s
Oct 12 18:25:58.458: INFO: Pod "exec-volume-test-preprovisionedpv-6rdr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.26604455s
STEP: Saw pod success
Oct 12 18:25:58.458: INFO: Pod "exec-volume-test-preprovisionedpv-6rdr" satisfied condition "Succeeded or Failed"
Oct 12 18:25:58.512: INFO: Trying to get logs from node ip-172-20-59-223.us-west-1.compute.internal pod exec-volume-test-preprovisionedpv-6rdr container exec-container-preprovisionedpv-6rdr: <nil>
STEP: delete the pod
Oct 12 18:25:58.628: INFO: Waiting for pod exec-volume-test-preprovisionedpv-6rdr to disappear
Oct 12 18:25:58.681: INFO: Pod exec-volume-test-preprovisionedpv-6rdr no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-6rdr
Oct 12 18:25:58.681: INFO: Deleting pod "exec-volume-test-preprovisionedpv-6rdr" in namespace "volume-7250"
... skipping 17 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":3,"skipped":6,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:25:59.443: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 111 lines ...
Oct 12 18:25:47.700: INFO: PersistentVolumeClaim pvc-88bgg found but phase is Pending instead of Bound.
Oct 12 18:25:49.759: INFO: PersistentVolumeClaim pvc-88bgg found and phase=Bound (4.159498755s)
Oct 12 18:25:49.759: INFO: Waiting up to 3m0s for PersistentVolume local-w84qg to have phase Bound
Oct 12 18:25:49.808: INFO: PersistentVolume local-w84qg found and phase=Bound (49.410114ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-jx7j
STEP: Creating a pod to test subpath
Oct 12 18:25:49.962: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-jx7j" in namespace "provisioning-5424" to be "Succeeded or Failed"
Oct 12 18:25:50.012: INFO: Pod "pod-subpath-test-preprovisionedpv-jx7j": Phase="Pending", Reason="", readiness=false. Elapsed: 49.578275ms
Oct 12 18:25:52.061: INFO: Pod "pod-subpath-test-preprovisionedpv-jx7j": Phase="Pending", Reason="", readiness=false. Elapsed: 2.099268614s
Oct 12 18:25:54.112: INFO: Pod "pod-subpath-test-preprovisionedpv-jx7j": Phase="Pending", Reason="", readiness=false. Elapsed: 4.150443966s
Oct 12 18:25:56.166: INFO: Pod "pod-subpath-test-preprovisionedpv-jx7j": Phase="Pending", Reason="", readiness=false. Elapsed: 6.204466316s
Oct 12 18:25:58.218: INFO: Pod "pod-subpath-test-preprovisionedpv-jx7j": Phase="Pending", Reason="", readiness=false. Elapsed: 8.255542437s
Oct 12 18:26:00.269: INFO: Pod "pod-subpath-test-preprovisionedpv-jx7j": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.307449943s
STEP: Saw pod success
Oct 12 18:26:00.270: INFO: Pod "pod-subpath-test-preprovisionedpv-jx7j" satisfied condition "Succeeded or Failed"
Oct 12 18:26:00.320: INFO: Trying to get logs from node ip-172-20-59-223.us-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-jx7j container test-container-volume-preprovisionedpv-jx7j: <nil>
STEP: delete the pod
Oct 12 18:26:00.429: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-jx7j to disappear
Oct 12 18:26:00.480: INFO: Pod pod-subpath-test-preprovisionedpv-jx7j no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-jx7j
Oct 12 18:26:00.480: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-jx7j" in namespace "provisioning-5424"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":4,"skipped":20,"failed":0}

SSSSSSSSSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":6,"skipped":25,"failed":0}
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 18:25:56.219: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0644 on node default medium
Oct 12 18:25:56.528: INFO: Waiting up to 5m0s for pod "pod-afb060ce-575b-4d26-b05c-2cc7550858d3" in namespace "emptydir-1508" to be "Succeeded or Failed"
Oct 12 18:25:56.582: INFO: Pod "pod-afb060ce-575b-4d26-b05c-2cc7550858d3": Phase="Pending", Reason="", readiness=false. Elapsed: 53.758678ms
Oct 12 18:25:58.631: INFO: Pod "pod-afb060ce-575b-4d26-b05c-2cc7550858d3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.103414879s
Oct 12 18:26:00.682: INFO: Pod "pod-afb060ce-575b-4d26-b05c-2cc7550858d3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.154689977s
Oct 12 18:26:02.733: INFO: Pod "pod-afb060ce-575b-4d26-b05c-2cc7550858d3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.205119672s
STEP: Saw pod success
Oct 12 18:26:02.733: INFO: Pod "pod-afb060ce-575b-4d26-b05c-2cc7550858d3" satisfied condition "Succeeded or Failed"
Oct 12 18:26:02.783: INFO: Trying to get logs from node ip-172-20-56-153.us-west-1.compute.internal pod pod-afb060ce-575b-4d26-b05c-2cc7550858d3 container test-container: <nil>
STEP: delete the pod
Oct 12 18:26:02.890: INFO: Waiting for pod pod-afb060ce-575b-4d26-b05c-2cc7550858d3 to disappear
Oct 12 18:26:02.940: INFO: Pod pod-afb060ce-575b-4d26-b05c-2cc7550858d3 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.823 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":25,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:26:03.076: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:174

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":4,"skipped":22,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 18:25:59.578: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 45 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286

      Disabled temporarily, reopen after #73168 is fixed

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:287
------------------------------
{"msg":"PASSED [sig-api-machinery] Discovery Custom resource should have storage version hash","total":-1,"completed":7,"skipped":30,"failed":0}
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 18:25:49.760: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 13 lines ...
STEP: updating the pod
Oct 12 18:26:00.872: INFO: Successfully updated pod "pod-update-activedeadlineseconds-d38e2603-eae8-4fc9-a496-c4ab414decc1"
Oct 12 18:26:00.872: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-d38e2603-eae8-4fc9-a496-c4ab414decc1" in namespace "pods-7641" to be "terminated due to deadline exceeded"
Oct 12 18:26:00.922: INFO: Pod "pod-update-activedeadlineseconds-d38e2603-eae8-4fc9-a496-c4ab414decc1": Phase="Running", Reason="", readiness=true. Elapsed: 49.470849ms
Oct 12 18:26:02.972: INFO: Pod "pod-update-activedeadlineseconds-d38e2603-eae8-4fc9-a496-c4ab414decc1": Phase="Running", Reason="", readiness=true. Elapsed: 2.10044432s
Oct 12 18:26:05.024: INFO: Pod "pod-update-activedeadlineseconds-d38e2603-eae8-4fc9-a496-c4ab414decc1": Phase="Running", Reason="", readiness=true. Elapsed: 4.151782211s
Oct 12 18:26:07.076: INFO: Pod "pod-update-activedeadlineseconds-d38e2603-eae8-4fc9-a496-c4ab414decc1": Phase="Failed", Reason="DeadlineExceeded", readiness=true. Elapsed: 6.204249868s
Oct 12 18:26:07.076: INFO: Pod "pod-update-activedeadlineseconds-d38e2603-eae8-4fc9-a496-c4ab414decc1" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 18:26:07.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7641" for this suite.

... skipping 19 lines ...
Oct 12 18:24:59.693: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-3558wzc5m
STEP: creating a claim
Oct 12 18:24:59.743: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-vlh6
STEP: Creating a pod to test subpath
Oct 12 18:24:59.894: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-vlh6" in namespace "provisioning-3558" to be "Succeeded or Failed"
Oct 12 18:24:59.947: INFO: Pod "pod-subpath-test-dynamicpv-vlh6": Phase="Pending", Reason="", readiness=false. Elapsed: 53.306819ms
Oct 12 18:25:01.997: INFO: Pod "pod-subpath-test-dynamicpv-vlh6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.103397551s
Oct 12 18:25:04.048: INFO: Pod "pod-subpath-test-dynamicpv-vlh6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.154023923s
Oct 12 18:25:06.101: INFO: Pod "pod-subpath-test-dynamicpv-vlh6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.206971308s
Oct 12 18:25:08.151: INFO: Pod "pod-subpath-test-dynamicpv-vlh6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.256820705s
Oct 12 18:25:10.201: INFO: Pod "pod-subpath-test-dynamicpv-vlh6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.306515317s
... skipping 3 lines ...
Oct 12 18:25:18.403: INFO: Pod "pod-subpath-test-dynamicpv-vlh6": Phase="Pending", Reason="", readiness=false. Elapsed: 18.509272591s
Oct 12 18:25:20.453: INFO: Pod "pod-subpath-test-dynamicpv-vlh6": Phase="Pending", Reason="", readiness=false. Elapsed: 20.559061993s
Oct 12 18:25:22.504: INFO: Pod "pod-subpath-test-dynamicpv-vlh6": Phase="Pending", Reason="", readiness=false. Elapsed: 22.610019046s
Oct 12 18:25:24.595: INFO: Pod "pod-subpath-test-dynamicpv-vlh6": Phase="Pending", Reason="", readiness=false. Elapsed: 24.700968425s
Oct 12 18:25:26.656: INFO: Pod "pod-subpath-test-dynamicpv-vlh6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.761594154s
STEP: Saw pod success
Oct 12 18:25:26.656: INFO: Pod "pod-subpath-test-dynamicpv-vlh6" satisfied condition "Succeeded or Failed"
Oct 12 18:25:26.728: INFO: Trying to get logs from node ip-172-20-47-26.us-west-1.compute.internal pod pod-subpath-test-dynamicpv-vlh6 container test-container-subpath-dynamicpv-vlh6: <nil>
STEP: delete the pod
Oct 12 18:25:26.837: INFO: Waiting for pod pod-subpath-test-dynamicpv-vlh6 to disappear
Oct 12 18:25:26.886: INFO: Pod pod-subpath-test-dynamicpv-vlh6 no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-vlh6
Oct 12 18:25:26.887: INFO: Deleting pod "pod-subpath-test-dynamicpv-vlh6" in namespace "provisioning-3558"
STEP: Creating pod pod-subpath-test-dynamicpv-vlh6
STEP: Creating a pod to test subpath
Oct 12 18:25:26.986: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-vlh6" in namespace "provisioning-3558" to be "Succeeded or Failed"
Oct 12 18:25:27.036: INFO: Pod "pod-subpath-test-dynamicpv-vlh6": Phase="Pending", Reason="", readiness=false. Elapsed: 49.353418ms
Oct 12 18:25:29.086: INFO: Pod "pod-subpath-test-dynamicpv-vlh6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.099286824s
Oct 12 18:25:31.135: INFO: Pod "pod-subpath-test-dynamicpv-vlh6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.149149253s
Oct 12 18:25:33.185: INFO: Pod "pod-subpath-test-dynamicpv-vlh6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.198670344s
Oct 12 18:25:35.235: INFO: Pod "pod-subpath-test-dynamicpv-vlh6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.248870601s
Oct 12 18:25:37.286: INFO: Pod "pod-subpath-test-dynamicpv-vlh6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.299702413s
... skipping 2 lines ...
Oct 12 18:25:43.440: INFO: Pod "pod-subpath-test-dynamicpv-vlh6": Phase="Pending", Reason="", readiness=false. Elapsed: 16.453554645s
Oct 12 18:25:45.491: INFO: Pod "pod-subpath-test-dynamicpv-vlh6": Phase="Pending", Reason="", readiness=false. Elapsed: 18.504179876s
Oct 12 18:25:47.543: INFO: Pod "pod-subpath-test-dynamicpv-vlh6": Phase="Pending", Reason="", readiness=false. Elapsed: 20.556723494s
Oct 12 18:25:49.597: INFO: Pod "pod-subpath-test-dynamicpv-vlh6": Phase="Pending", Reason="", readiness=false. Elapsed: 22.610461723s
Oct 12 18:25:51.648: INFO: Pod "pod-subpath-test-dynamicpv-vlh6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.661299108s
STEP: Saw pod success
Oct 12 18:25:51.648: INFO: Pod "pod-subpath-test-dynamicpv-vlh6" satisfied condition "Succeeded or Failed"
Oct 12 18:25:51.700: INFO: Trying to get logs from node ip-172-20-37-53.us-west-1.compute.internal pod pod-subpath-test-dynamicpv-vlh6 container test-container-subpath-dynamicpv-vlh6: <nil>
STEP: delete the pod
Oct 12 18:25:51.811: INFO: Waiting for pod pod-subpath-test-dynamicpv-vlh6 to disappear
Oct 12 18:25:51.860: INFO: Pod pod-subpath-test-dynamicpv-vlh6 no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-vlh6
Oct 12 18:25:51.860: INFO: Deleting pod "pod-subpath-test-dynamicpv-vlh6" in namespace "provisioning-3558"
... skipping 20 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:395
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":4,"skipped":10,"failed":0}

SSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:26:07.550: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 122 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 18:26:08.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-7352" for this suite.

•
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":-1,"completed":5,"skipped":38,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:26:08.286: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 80 lines ...
• [SLOW TEST:11.467 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":-1,"completed":10,"skipped":60,"failed":0}

SSSSSSSSS
------------------------------
{"msg":"PASSED [sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":30,"failed":0}
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 18:26:07.192: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename disruption
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 13 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 18:26:10.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-1121" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should update/patch PodDisruptionBudget status [Conformance]","total":-1,"completed":9,"skipped":30,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:26:10.254: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 150 lines ...
• [SLOW TEST:14.108 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny custom resource creation, update and deletion [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":-1,"completed":4,"skipped":23,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:26:10.888: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
... skipping 55 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:452
    that expects a client request
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:453
      should support a client that connects, sends NO DATA, and disconnects
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:454
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects a client request should support a client that connects, sends NO DATA, and disconnects","total":-1,"completed":6,"skipped":46,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:26:17.512: INFO: Driver local doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 205 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (block volmode)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data","total":-1,"completed":5,"skipped":65,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:26:17.758: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 38 lines ...
Oct 12 18:26:02.559: INFO: PersistentVolumeClaim pvc-994r7 found but phase is Pending instead of Bound.
Oct 12 18:26:04.616: INFO: PersistentVolumeClaim pvc-994r7 found and phase=Bound (12.369608069s)
Oct 12 18:26:04.616: INFO: Waiting up to 3m0s for PersistentVolume local-ljdnc to have phase Bound
Oct 12 18:26:04.668: INFO: PersistentVolume local-ljdnc found and phase=Bound (51.00156ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-l6mr
STEP: Creating a pod to test subpath
Oct 12 18:26:04.824: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-l6mr" in namespace "provisioning-7350" to be "Succeeded or Failed"
Oct 12 18:26:04.874: INFO: Pod "pod-subpath-test-preprovisionedpv-l6mr": Phase="Pending", Reason="", readiness=false. Elapsed: 50.452304ms
Oct 12 18:26:06.925: INFO: Pod "pod-subpath-test-preprovisionedpv-l6mr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.101666319s
Oct 12 18:26:08.977: INFO: Pod "pod-subpath-test-preprovisionedpv-l6mr": Phase="Pending", Reason="", readiness=false. Elapsed: 4.153432599s
Oct 12 18:26:11.031: INFO: Pod "pod-subpath-test-preprovisionedpv-l6mr": Phase="Pending", Reason="", readiness=false. Elapsed: 6.207408293s
Oct 12 18:26:13.083: INFO: Pod "pod-subpath-test-preprovisionedpv-l6mr": Phase="Pending", Reason="", readiness=false. Elapsed: 8.258805089s
Oct 12 18:26:15.134: INFO: Pod "pod-subpath-test-preprovisionedpv-l6mr": Phase="Pending", Reason="", readiness=false. Elapsed: 10.310450413s
Oct 12 18:26:17.186: INFO: Pod "pod-subpath-test-preprovisionedpv-l6mr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.362344354s
STEP: Saw pod success
Oct 12 18:26:17.186: INFO: Pod "pod-subpath-test-preprovisionedpv-l6mr" satisfied condition "Succeeded or Failed"
Oct 12 18:26:17.239: INFO: Trying to get logs from node ip-172-20-56-153.us-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-l6mr container test-container-volume-preprovisionedpv-l6mr: <nil>
STEP: delete the pod
Oct 12 18:26:17.347: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-l6mr to disappear
Oct 12 18:26:17.398: INFO: Pod pod-subpath-test-preprovisionedpv-l6mr no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-l6mr
Oct 12 18:26:17.398: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-l6mr" in namespace "provisioning-7350"
... skipping 129 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSIStorageCapacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1257
    CSIStorageCapacity used, have capacity
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1300
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity","total":-1,"completed":4,"skipped":32,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl Port forwarding
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 37 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:474
    that expects a client request
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:475
      should support a client that connects, sends NO DATA, and disconnects
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:476
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects a client request should support a client that connects, sends NO DATA, and disconnects","total":-1,"completed":4,"skipped":17,"failed":0}

SSSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 16 lines ...
• [SLOW TEST:42.706 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should not create pods when created in suspend state
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:73
------------------------------
{"msg":"PASSED [sig-apps] Job should not create pods when created in suspend state","total":-1,"completed":6,"skipped":36,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:26:19.029: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 114 lines ...
• [SLOW TEST:9.461 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should resolve DNS of partial qualified names for the cluster [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:90
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for the cluster [LinuxOnly]","total":-1,"completed":11,"skipped":69,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:26:19.315: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
... skipping 106 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 18:26:20.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-589" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":-1,"completed":12,"skipped":81,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:26:20.256: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 37 lines ...
      Driver local doesn't support ext4 -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:121
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: block] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":3,"skipped":49,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 18:25:53.009: INFO: >>> kubeConfig: /root/.kube/config
... skipping 17 lines ...
Oct 12 18:26:02.660: INFO: PersistentVolumeClaim pvc-rx7qf found but phase is Pending instead of Bound.
Oct 12 18:26:04.713: INFO: PersistentVolumeClaim pvc-rx7qf found and phase=Bound (6.207742322s)
Oct 12 18:26:04.713: INFO: Waiting up to 3m0s for PersistentVolume local-g2qvp to have phase Bound
Oct 12 18:26:04.767: INFO: PersistentVolume local-g2qvp found and phase=Bound (54.237993ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-j69n
STEP: Creating a pod to test subpath
Oct 12 18:26:04.925: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-j69n" in namespace "provisioning-8564" to be "Succeeded or Failed"
Oct 12 18:26:04.976: INFO: Pod "pod-subpath-test-preprovisionedpv-j69n": Phase="Pending", Reason="", readiness=false. Elapsed: 51.601792ms
Oct 12 18:26:07.028: INFO: Pod "pod-subpath-test-preprovisionedpv-j69n": Phase="Pending", Reason="", readiness=false. Elapsed: 2.1032715s
Oct 12 18:26:09.093: INFO: Pod "pod-subpath-test-preprovisionedpv-j69n": Phase="Pending", Reason="", readiness=false. Elapsed: 4.168169615s
Oct 12 18:26:11.145: INFO: Pod "pod-subpath-test-preprovisionedpv-j69n": Phase="Pending", Reason="", readiness=false. Elapsed: 6.220437412s
Oct 12 18:26:13.198: INFO: Pod "pod-subpath-test-preprovisionedpv-j69n": Phase="Pending", Reason="", readiness=false. Elapsed: 8.272987699s
Oct 12 18:26:15.252: INFO: Pod "pod-subpath-test-preprovisionedpv-j69n": Phase="Pending", Reason="", readiness=false. Elapsed: 10.326985619s
Oct 12 18:26:17.305: INFO: Pod "pod-subpath-test-preprovisionedpv-j69n": Phase="Pending", Reason="", readiness=false. Elapsed: 12.379913039s
Oct 12 18:26:19.357: INFO: Pod "pod-subpath-test-preprovisionedpv-j69n": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.432133372s
STEP: Saw pod success
Oct 12 18:26:19.357: INFO: Pod "pod-subpath-test-preprovisionedpv-j69n" satisfied condition "Succeeded or Failed"
Oct 12 18:26:19.409: INFO: Trying to get logs from node ip-172-20-56-153.us-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-j69n container test-container-subpath-preprovisionedpv-j69n: <nil>
STEP: delete the pod
Oct 12 18:26:19.585: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-j69n to disappear
Oct 12 18:26:19.642: INFO: Pod pod-subpath-test-preprovisionedpv-j69n no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-j69n
Oct 12 18:26:19.643: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-j69n" in namespace "provisioning-8564"
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:365
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":4,"skipped":49,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 60 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:379
    should support exec
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:391
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec","total":-1,"completed":5,"skipped":23,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:26:23.176: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 215 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:97
    should adopt matching orphans and release non-matching pods
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:167
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should adopt matching orphans and release non-matching pods","total":-1,"completed":6,"skipped":30,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:26:25.826: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 101 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230

      Driver local doesn't support InlineVolume -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl apply apply set/view last-applied","total":-1,"completed":6,"skipped":48,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:26:25.917: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 206 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      Verify if offline PVC expansion works
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:174
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works","total":-1,"completed":9,"skipped":48,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:26:27.780: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 147 lines ...
• [SLOW TEST:60.496 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":54,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:26:28.569: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 100 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should provision a volume and schedule a pod with AllowedTopologies
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:164
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies","total":-1,"completed":5,"skipped":17,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:26:32.064: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
... skipping 106 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:97
    should validate Statefulset Status endpoints [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should validate Statefulset Status endpoints [Conformance]","total":-1,"completed":5,"skipped":27,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:26:32.537: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 88 lines ...
Oct 12 18:26:27.899: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0666 on node default medium
Oct 12 18:26:28.203: INFO: Waiting up to 5m0s for pod "pod-b283b63c-4025-4fee-8086-c48e789cf1bd" in namespace "emptydir-9980" to be "Succeeded or Failed"
Oct 12 18:26:28.254: INFO: Pod "pod-b283b63c-4025-4fee-8086-c48e789cf1bd": Phase="Pending", Reason="", readiness=false. Elapsed: 50.685043ms
Oct 12 18:26:30.304: INFO: Pod "pod-b283b63c-4025-4fee-8086-c48e789cf1bd": Phase="Running", Reason="", readiness=true. Elapsed: 2.101099153s
Oct 12 18:26:32.356: INFO: Pod "pod-b283b63c-4025-4fee-8086-c48e789cf1bd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.152655347s
STEP: Saw pod success
Oct 12 18:26:32.356: INFO: Pod "pod-b283b63c-4025-4fee-8086-c48e789cf1bd" satisfied condition "Succeeded or Failed"
Oct 12 18:26:32.406: INFO: Trying to get logs from node ip-172-20-47-26.us-west-1.compute.internal pod pod-b283b63c-4025-4fee-8086-c48e789cf1bd container test-container: <nil>
STEP: delete the pod
Oct 12 18:26:32.519: INFO: Waiting for pod pod-b283b63c-4025-4fee-8086-c48e789cf1bd to disappear
Oct 12 18:26:32.569: INFO: Pod pod-b283b63c-4025-4fee-8086-c48e789cf1bd no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 36 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-bindmounted]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 85 lines ...
• [SLOW TEST:124.797 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should not emit unexpected warnings
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:216
------------------------------
{"msg":"PASSED [sig-apps] CronJob should not emit unexpected warnings","total":-1,"completed":2,"skipped":9,"failed":0}

SS
------------------------------
[BeforeEach] [sig-instrumentation] MetricsGrabber
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 11 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 18:26:33.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "metrics-grabber-5914" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from API server.","total":-1,"completed":6,"skipped":39,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:26:33.435: INFO: Only supported for providers [vsphere] (not aws)
... skipping 28 lines ...
[It] should support readOnly directory specified in the volumeMount
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:365
Oct 12 18:26:26.196: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Oct 12 18:26:26.253: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-gxph
STEP: Creating a pod to test subpath
Oct 12 18:26:26.308: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-gxph" in namespace "provisioning-4519" to be "Succeeded or Failed"
Oct 12 18:26:26.360: INFO: Pod "pod-subpath-test-inlinevolume-gxph": Phase="Pending", Reason="", readiness=false. Elapsed: 52.058804ms
Oct 12 18:26:28.413: INFO: Pod "pod-subpath-test-inlinevolume-gxph": Phase="Pending", Reason="", readiness=false. Elapsed: 2.104926524s
Oct 12 18:26:30.466: INFO: Pod "pod-subpath-test-inlinevolume-gxph": Phase="Pending", Reason="", readiness=false. Elapsed: 4.157879621s
Oct 12 18:26:32.519: INFO: Pod "pod-subpath-test-inlinevolume-gxph": Phase="Pending", Reason="", readiness=false. Elapsed: 6.21138839s
Oct 12 18:26:34.570: INFO: Pod "pod-subpath-test-inlinevolume-gxph": Phase="Pending", Reason="", readiness=false. Elapsed: 8.262606168s
Oct 12 18:26:36.623: INFO: Pod "pod-subpath-test-inlinevolume-gxph": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.314947054s
STEP: Saw pod success
Oct 12 18:26:36.623: INFO: Pod "pod-subpath-test-inlinevolume-gxph" satisfied condition "Succeeded or Failed"
Oct 12 18:26:36.674: INFO: Trying to get logs from node ip-172-20-56-153.us-west-1.compute.internal pod pod-subpath-test-inlinevolume-gxph container test-container-subpath-inlinevolume-gxph: <nil>
STEP: delete the pod
Oct 12 18:26:36.788: INFO: Waiting for pod pod-subpath-test-inlinevolume-gxph to disappear
Oct 12 18:26:36.838: INFO: Pod pod-subpath-test-inlinevolume-gxph no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-gxph
Oct 12 18:26:36.838: INFO: Deleting pod "pod-subpath-test-inlinevolume-gxph" in namespace "provisioning-4519"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:365
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":7,"skipped":44,"failed":0}

SSSS
------------------------------
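The subPath and emptydir cases above all follow the same shape: create a test pod, then poll its phase until it reports "Succeeded or Failed", as in the "Waiting up to 5m0s for pod ..." lines. A minimal client-go sketch of that polling loop (not the e2e framework's actual helper), assuming a recent client-go and a reachable kubeconfig; the pod name, namespace, and intervals are illustrative:

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder names; the pods in this log (pod-subpath-test-*, pod-...) follow the same pattern.
	namespace, podName := "default", "my-test-pod"

	// Kubeconfig path as logged by this suite; adjust for your environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Poll every 2s, for up to 5m, until the pod reaches a terminal phase,
	// mirroring the 'Waiting up to 5m0s for pod ... to be "Succeeded or Failed"' lines above.
	err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := client.CoreV1().Pods(namespace).Get(context.TODO(), podName, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("pod %q phase=%s\n", podName, pod.Status.Phase)
		return pod.Status.Phase == corev1.PodSucceeded || pod.Status.Phase == corev1.PodFailed, nil
	})
	if err != nil {
		log.Fatalf("pod never reached a terminal phase: %v", err)
	}
}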
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:26:37.106: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 155 lines ...
Oct 12 18:26:33.767: INFO: PersistentVolumeClaim pvc-b8qsf found but phase is Pending instead of Bound.
Oct 12 18:26:35.828: INFO: PersistentVolumeClaim pvc-b8qsf found and phase=Bound (4.16721131s)
Oct 12 18:26:35.828: INFO: Waiting up to 3m0s for PersistentVolume local-6pdm4 to have phase Bound
Oct 12 18:26:35.877: INFO: PersistentVolume local-6pdm4 found and phase=Bound (49.053868ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-bfkl
STEP: Creating a pod to test subpath
Oct 12 18:26:36.030: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-bfkl" in namespace "provisioning-1043" to be "Succeeded or Failed"
Oct 12 18:26:36.079: INFO: Pod "pod-subpath-test-preprovisionedpv-bfkl": Phase="Pending", Reason="", readiness=false. Elapsed: 49.46354ms
Oct 12 18:26:38.129: INFO: Pod "pod-subpath-test-preprovisionedpv-bfkl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.09954764s
Oct 12 18:26:40.180: INFO: Pod "pod-subpath-test-preprovisionedpv-bfkl": Phase="Pending", Reason="", readiness=false. Elapsed: 4.150702899s
Oct 12 18:26:42.231: INFO: Pod "pod-subpath-test-preprovisionedpv-bfkl": Phase="Pending", Reason="", readiness=false. Elapsed: 6.201197841s
Oct 12 18:26:44.281: INFO: Pod "pod-subpath-test-preprovisionedpv-bfkl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.251761401s
STEP: Saw pod success
Oct 12 18:26:44.282: INFO: Pod "pod-subpath-test-preprovisionedpv-bfkl" satisfied condition "Succeeded or Failed"
Oct 12 18:26:44.331: INFO: Trying to get logs from node ip-172-20-47-26.us-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-bfkl container test-container-volume-preprovisionedpv-bfkl: <nil>
STEP: delete the pod
Oct 12 18:26:44.451: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-bfkl to disappear
Oct 12 18:26:44.500: INFO: Pod pod-subpath-test-preprovisionedpv-bfkl no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-bfkl
Oct 12 18:26:44.501: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-bfkl" in namespace "provisioning-1043"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":6,"skipped":65,"failed":0}

SSSSSSSSSSSSSSS
------------------------------
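The pre-provisioned PV cases above first wait for the PersistentVolumeClaim to leave Pending and reach Bound before the test pod is created ("PersistentVolumeClaim pvc-... found but phase is Pending instead of Bound."). A small helper sketching that wait, under the same client-go assumptions as the previous snippet; the function name is made up for illustration:

package storagecheck

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPVCBound polls until the named PVC reports phase Bound, mirroring the
// "Waiting up to timeout=3m0s for PersistentVolumeClaims [...] to have phase Bound" lines above.
func waitForPVCBound(client kubernetes.Interface, namespace, name string) error {
	return wait.PollImmediate(2*time.Second, 3*time.Minute, func() (bool, error) {
		pvc, err := client.CoreV1().PersistentVolumeClaims(namespace).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return pvc.Status.Phase == corev1.ClaimBound, nil
	})
}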
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
... skipping 83 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":3,"skipped":15,"failed":0}

S
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 89 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":3,"skipped":11,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:26:47.950: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 57 lines ...
• [SLOW TEST:46.725 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete pods when suspended
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:111
------------------------------
{"msg":"PASSED [sig-apps] Job should delete pods when suspended","total":-1,"completed":5,"skipped":30,"failed":0}

SS
------------------------------
[BeforeEach] [sig-node] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 18:26:37.124: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating secret secrets-3017/secret-test-cc959481-4a08-4eba-a772-d68cba42e1af
STEP: Creating a pod to test consume secrets
Oct 12 18:26:37.486: INFO: Waiting up to 5m0s for pod "pod-configmaps-70cde726-3074-4403-bebc-395a4b0b50a9" in namespace "secrets-3017" to be "Succeeded or Failed"
Oct 12 18:26:37.538: INFO: Pod "pod-configmaps-70cde726-3074-4403-bebc-395a4b0b50a9": Phase="Pending", Reason="", readiness=false. Elapsed: 51.941942ms
Oct 12 18:26:39.589: INFO: Pod "pod-configmaps-70cde726-3074-4403-bebc-395a4b0b50a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.10308192s
Oct 12 18:26:41.641: INFO: Pod "pod-configmaps-70cde726-3074-4403-bebc-395a4b0b50a9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.155105295s
Oct 12 18:26:43.692: INFO: Pod "pod-configmaps-70cde726-3074-4403-bebc-395a4b0b50a9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.206357648s
Oct 12 18:26:45.744: INFO: Pod "pod-configmaps-70cde726-3074-4403-bebc-395a4b0b50a9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.258418785s
Oct 12 18:26:47.795: INFO: Pod "pod-configmaps-70cde726-3074-4403-bebc-395a4b0b50a9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.309421661s
STEP: Saw pod success
Oct 12 18:26:47.795: INFO: Pod "pod-configmaps-70cde726-3074-4403-bebc-395a4b0b50a9" satisfied condition "Succeeded or Failed"
Oct 12 18:26:47.847: INFO: Trying to get logs from node ip-172-20-56-153.us-west-1.compute.internal pod pod-configmaps-70cde726-3074-4403-bebc-395a4b0b50a9 container env-test: <nil>
STEP: delete the pod
Oct 12 18:26:47.962: INFO: Waiting for pod pod-configmaps-70cde726-3074-4403-bebc-395a4b0b50a9 to disappear
Oct 12 18:26:48.013: INFO: Pod pod-configmaps-70cde726-3074-4403-bebc-395a4b0b50a9 no longer exists
[AfterEach] [sig-node] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 52 lines ...
Oct 12 18:26:19.370: INFO: PersistentVolumeClaim pvc-wtq5z found but phase is Pending instead of Bound.
Oct 12 18:26:21.425: INFO: PersistentVolumeClaim pvc-wtq5z found and phase=Bound (2.104267523s)
Oct 12 18:26:21.425: INFO: Waiting up to 3m0s for PersistentVolume local-27dnm to have phase Bound
Oct 12 18:26:21.474: INFO: PersistentVolume local-27dnm found and phase=Bound (49.720848ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-p2f8
STEP: Creating a pod to test subpath
Oct 12 18:26:21.648: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-p2f8" in namespace "provisioning-4717" to be "Succeeded or Failed"
Oct 12 18:26:21.699: INFO: Pod "pod-subpath-test-preprovisionedpv-p2f8": Phase="Pending", Reason="", readiness=false. Elapsed: 50.771299ms
Oct 12 18:26:23.749: INFO: Pod "pod-subpath-test-preprovisionedpv-p2f8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100678496s
Oct 12 18:26:25.800: INFO: Pod "pod-subpath-test-preprovisionedpv-p2f8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.151831308s
Oct 12 18:26:27.850: INFO: Pod "pod-subpath-test-preprovisionedpv-p2f8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.202041247s
Oct 12 18:26:29.903: INFO: Pod "pod-subpath-test-preprovisionedpv-p2f8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.254606361s
Oct 12 18:26:31.955: INFO: Pod "pod-subpath-test-preprovisionedpv-p2f8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.306244036s
Oct 12 18:26:34.022: INFO: Pod "pod-subpath-test-preprovisionedpv-p2f8": Phase="Pending", Reason="", readiness=false. Elapsed: 12.373460539s
Oct 12 18:26:36.073: INFO: Pod "pod-subpath-test-preprovisionedpv-p2f8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.425007877s
STEP: Saw pod success
Oct 12 18:26:36.073: INFO: Pod "pod-subpath-test-preprovisionedpv-p2f8" satisfied condition "Succeeded or Failed"
Oct 12 18:26:36.123: INFO: Trying to get logs from node ip-172-20-56-153.us-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-p2f8 container test-container-subpath-preprovisionedpv-p2f8: <nil>
STEP: delete the pod
Oct 12 18:26:36.246: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-p2f8 to disappear
Oct 12 18:26:36.296: INFO: Pod pod-subpath-test-preprovisionedpv-p2f8 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-p2f8
Oct 12 18:26:36.296: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-p2f8" in namespace "provisioning-4717"
STEP: Creating pod pod-subpath-test-preprovisionedpv-p2f8
STEP: Creating a pod to test subpath
Oct 12 18:26:36.399: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-p2f8" in namespace "provisioning-4717" to be "Succeeded or Failed"
Oct 12 18:26:36.452: INFO: Pod "pod-subpath-test-preprovisionedpv-p2f8": Phase="Pending", Reason="", readiness=false. Elapsed: 52.868526ms
Oct 12 18:26:38.504: INFO: Pod "pod-subpath-test-preprovisionedpv-p2f8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.10466434s
Oct 12 18:26:40.557: INFO: Pod "pod-subpath-test-preprovisionedpv-p2f8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.158375465s
Oct 12 18:26:42.609: INFO: Pod "pod-subpath-test-preprovisionedpv-p2f8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.210301571s
Oct 12 18:26:44.664: INFO: Pod "pod-subpath-test-preprovisionedpv-p2f8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.265281344s
Oct 12 18:26:46.715: INFO: Pod "pod-subpath-test-preprovisionedpv-p2f8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.316300934s
STEP: Saw pod success
Oct 12 18:26:46.715: INFO: Pod "pod-subpath-test-preprovisionedpv-p2f8" satisfied condition "Succeeded or Failed"
Oct 12 18:26:46.768: INFO: Trying to get logs from node ip-172-20-56-153.us-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-p2f8 container test-container-subpath-preprovisionedpv-p2f8: <nil>
STEP: delete the pod
Oct 12 18:26:46.873: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-p2f8 to disappear
Oct 12 18:26:46.937: INFO: Pod pod-subpath-test-preprovisionedpv-p2f8 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-p2f8
Oct 12 18:26:46.937: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-p2f8" in namespace "provisioning-4717"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:395
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":10,"skipped":52,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:26:48.287: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 122 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 18:26:49.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6918" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":-1,"completed":4,"skipped":15,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-network] EndpointSlice
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 8 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 18:26:50.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslice-4297" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","total":-1,"completed":5,"skipped":18,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:26:50.220: INFO: Only supported for providers [azure] (not aws)
... skipping 30 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 18:26:50.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-274" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":-1,"completed":6,"skipped":26,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:26:50.863: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 131 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI FSGroupPolicy [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1559
    should not modify fsGroup if fsGroupPolicy=None
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1583
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should not modify fsGroup if fsGroupPolicy=None","total":-1,"completed":3,"skipped":24,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:26:51.056: INFO: Only supported for providers [vsphere] (not aws)
... skipping 20 lines ...
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 18:26:51.065: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename topology
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to schedule a pod which has topologies that conflict with AllowedTopologies
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192
Oct 12 18:26:51.368: INFO: found topology map[topology.kubernetes.io/zone:us-west-1a]
Oct 12 18:26:51.368: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
Oct 12 18:26:51.368: INFO: Not enough topologies in cluster -- skipping
STEP: Deleting pvc
STEP: Deleting sc
... skipping 7 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: aws]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Not enough topologies in cluster -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:199
------------------------------
... skipping 69 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: block] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":13,"skipped":92,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 8 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 18:26:54.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-9676" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should not run without a specified user ID","total":-1,"completed":14,"skipped":93,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:26:54.512: INFO: Only supported for providers [vsphere] (not aws)
... skipping 14 lines ...
      Only supported for providers [vsphere] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1438
------------------------------
S
------------------------------
{"msg":"PASSED [sig-network] Services should check NodePort out-of-range","total":-1,"completed":7,"skipped":80,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 18:26:46.697: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 8 lines ...
Oct 12 18:26:53.632: INFO: Creating a PV followed by a PVC
Oct 12 18:26:53.735: INFO: Waiting for PV local-pv6nrgm to bind to PVC pvc-ccv8j
Oct 12 18:26:53.735: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-ccv8j] to have phase Bound
Oct 12 18:26:53.784: INFO: PersistentVolumeClaim pvc-ccv8j found and phase=Bound (49.477695ms)
Oct 12 18:26:53.785: INFO: Waiting up to 3m0s for PersistentVolume local-pv6nrgm to have phase Bound
Oct 12 18:26:53.835: INFO: PersistentVolume local-pv6nrgm found and phase=Bound (50.547127ms)
[It] should fail scheduling due to different NodeSelector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:379
STEP: local-volume-type: dir
Oct 12 18:26:53.985: INFO: Waiting up to 5m0s for pod "pod-b892cff4-eed8-4886-881b-962282dfa435" in namespace "persistent-local-volumes-test-6443" to be "Unschedulable"
Oct 12 18:26:54.034: INFO: Pod "pod-b892cff4-eed8-4886-881b-962282dfa435": Phase="Pending", Reason="", readiness=false. Elapsed: 49.518182ms
Oct 12 18:26:54.035: INFO: Pod "pod-b892cff4-eed8-4886-881b-962282dfa435" satisfied condition "Unschedulable"
[AfterEach] Pod with node different from PV's NodeAffinity
... skipping 12 lines ...

• [SLOW TEST:7.998 seconds]
[sig-storage] PersistentVolumes-local 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Pod with node different from PV's NodeAffinity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:347
    should fail scheduling due to different NodeSelector
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:379
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  Pod with node different from PV's NodeAffinity should fail scheduling due to different NodeSelector","total":-1,"completed":8,"skipped":80,"failed":0}

SSSS
------------------------------
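The NodeSelector case above expects the opposite outcome: the pod must stay unscheduled, which the log records as the pod satisfying the "Unschedulable" condition. A sketch of how that state can be detected with client-go (not the suite's actual check); the helper name is illustrative and the reason string is compared literally:

package scheduling

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// isUnschedulable reports whether the scheduler has marked the pod as unschedulable:
// the PodScheduled condition is False with reason "Unschedulable".
func isUnschedulable(client kubernetes.Interface, namespace, name string) (bool, error) {
	pod, err := client.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodScheduled && c.Status == corev1.ConditionFalse && c.Reason == "Unschedulable" {
			return true, nil
		}
	}
	return false, nil
}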
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:26:54.723: INFO: Only supported for providers [vsphere] (not aws)
... skipping 95 lines ...
[It] should support non-existent path
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
Oct 12 18:26:46.498: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Oct 12 18:26:46.498: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-tm7x
STEP: Creating a pod to test subpath
Oct 12 18:26:46.552: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-tm7x" in namespace "provisioning-5653" to be "Succeeded or Failed"
Oct 12 18:26:46.601: INFO: Pod "pod-subpath-test-inlinevolume-tm7x": Phase="Pending", Reason="", readiness=false. Elapsed: 49.387727ms
Oct 12 18:26:48.651: INFO: Pod "pod-subpath-test-inlinevolume-tm7x": Phase="Pending", Reason="", readiness=false. Elapsed: 2.099220406s
Oct 12 18:26:50.702: INFO: Pod "pod-subpath-test-inlinevolume-tm7x": Phase="Pending", Reason="", readiness=false. Elapsed: 4.150542977s
Oct 12 18:26:52.752: INFO: Pod "pod-subpath-test-inlinevolume-tm7x": Phase="Pending", Reason="", readiness=false. Elapsed: 6.200555608s
Oct 12 18:26:54.803: INFO: Pod "pod-subpath-test-inlinevolume-tm7x": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.251433367s
STEP: Saw pod success
Oct 12 18:26:54.803: INFO: Pod "pod-subpath-test-inlinevolume-tm7x" satisfied condition "Succeeded or Failed"
Oct 12 18:26:54.853: INFO: Trying to get logs from node ip-172-20-56-153.us-west-1.compute.internal pod pod-subpath-test-inlinevolume-tm7x container test-container-volume-inlinevolume-tm7x: <nil>
STEP: delete the pod
Oct 12 18:26:54.964: INFO: Waiting for pod pod-subpath-test-inlinevolume-tm7x to disappear
Oct 12 18:26:55.013: INFO: Pod pod-subpath-test-inlinevolume-tm7x no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-tm7x
Oct 12 18:26:55.013: INFO: Deleting pod "pod-subpath-test-inlinevolume-tm7x" in namespace "provisioning-5653"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path","total":-1,"completed":4,"skipped":16,"failed":0}

SS
------------------------------
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 24 lines ...
Oct 12 18:26:29.213: INFO: Unable to read jessie_udp@dns-test-service.dns-2432 from pod dns-2432/dns-test-9ec13c97-5768-45fd-96dd-7e290d14e7da: the server could not find the requested resource (get pods dns-test-9ec13c97-5768-45fd-96dd-7e290d14e7da)
Oct 12 18:26:29.266: INFO: Unable to read jessie_tcp@dns-test-service.dns-2432 from pod dns-2432/dns-test-9ec13c97-5768-45fd-96dd-7e290d14e7da: the server could not find the requested resource (get pods dns-test-9ec13c97-5768-45fd-96dd-7e290d14e7da)
Oct 12 18:26:29.317: INFO: Unable to read jessie_udp@dns-test-service.dns-2432.svc from pod dns-2432/dns-test-9ec13c97-5768-45fd-96dd-7e290d14e7da: the server could not find the requested resource (get pods dns-test-9ec13c97-5768-45fd-96dd-7e290d14e7da)
Oct 12 18:26:29.367: INFO: Unable to read jessie_tcp@dns-test-service.dns-2432.svc from pod dns-2432/dns-test-9ec13c97-5768-45fd-96dd-7e290d14e7da: the server could not find the requested resource (get pods dns-test-9ec13c97-5768-45fd-96dd-7e290d14e7da)
Oct 12 18:26:29.418: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2432.svc from pod dns-2432/dns-test-9ec13c97-5768-45fd-96dd-7e290d14e7da: the server could not find the requested resource (get pods dns-test-9ec13c97-5768-45fd-96dd-7e290d14e7da)
Oct 12 18:26:29.468: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2432.svc from pod dns-2432/dns-test-9ec13c97-5768-45fd-96dd-7e290d14e7da: the server could not find the requested resource (get pods dns-test-9ec13c97-5768-45fd-96dd-7e290d14e7da)
Oct 12 18:26:29.785: INFO: Lookups using dns-2432/dns-test-9ec13c97-5768-45fd-96dd-7e290d14e7da failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2432 wheezy_tcp@dns-test-service.dns-2432 wheezy_udp@dns-test-service.dns-2432.svc wheezy_tcp@dns-test-service.dns-2432.svc wheezy_udp@_http._tcp.dns-test-service.dns-2432.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2432.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2432 jessie_tcp@dns-test-service.dns-2432 jessie_udp@dns-test-service.dns-2432.svc jessie_tcp@dns-test-service.dns-2432.svc jessie_udp@_http._tcp.dns-test-service.dns-2432.svc jessie_tcp@_http._tcp.dns-test-service.dns-2432.svc]

Oct 12 18:26:34.837: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2432/dns-test-9ec13c97-5768-45fd-96dd-7e290d14e7da: the server could not find the requested resource (get pods dns-test-9ec13c97-5768-45fd-96dd-7e290d14e7da)
Oct 12 18:26:34.888: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2432/dns-test-9ec13c97-5768-45fd-96dd-7e290d14e7da: the server could not find the requested resource (get pods dns-test-9ec13c97-5768-45fd-96dd-7e290d14e7da)
Oct 12 18:26:34.939: INFO: Unable to read wheezy_udp@dns-test-service.dns-2432 from pod dns-2432/dns-test-9ec13c97-5768-45fd-96dd-7e290d14e7da: the server could not find the requested resource (get pods dns-test-9ec13c97-5768-45fd-96dd-7e290d14e7da)
Oct 12 18:26:34.990: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2432 from pod dns-2432/dns-test-9ec13c97-5768-45fd-96dd-7e290d14e7da: the server could not find the requested resource (get pods dns-test-9ec13c97-5768-45fd-96dd-7e290d14e7da)
Oct 12 18:26:35.042: INFO: Unable to read wheezy_udp@dns-test-service.dns-2432.svc from pod dns-2432/dns-test-9ec13c97-5768-45fd-96dd-7e290d14e7da: the server could not find the requested resource (get pods dns-test-9ec13c97-5768-45fd-96dd-7e290d14e7da)
... skipping 5 lines ...
Oct 12 18:26:35.665: INFO: Unable to read jessie_udp@dns-test-service.dns-2432 from pod dns-2432/dns-test-9ec13c97-5768-45fd-96dd-7e290d14e7da: the server could not find the requested resource (get pods dns-test-9ec13c97-5768-45fd-96dd-7e290d14e7da)
Oct 12 18:26:35.716: INFO: Unable to read jessie_tcp@dns-test-service.dns-2432 from pod dns-2432/dns-test-9ec13c97-5768-45fd-96dd-7e290d14e7da: the server could not find the requested resource (get pods dns-test-9ec13c97-5768-45fd-96dd-7e290d14e7da)
Oct 12 18:26:35.769: INFO: Unable to read jessie_udp@dns-test-service.dns-2432.svc from pod dns-2432/dns-test-9ec13c97-5768-45fd-96dd-7e290d14e7da: the server could not find the requested resource (get pods dns-test-9ec13c97-5768-45fd-96dd-7e290d14e7da)
Oct 12 18:26:35.819: INFO: Unable to read jessie_tcp@dns-test-service.dns-2432.svc from pod dns-2432/dns-test-9ec13c97-5768-45fd-96dd-7e290d14e7da: the server could not find the requested resource (get pods dns-test-9ec13c97-5768-45fd-96dd-7e290d14e7da)
Oct 12 18:26:35.871: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2432.svc from pod dns-2432/dns-test-9ec13c97-5768-45fd-96dd-7e290d14e7da: the server could not find the requested resource (get pods dns-test-9ec13c97-5768-45fd-96dd-7e290d14e7da)
Oct 12 18:26:35.922: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2432.svc from pod dns-2432/dns-test-9ec13c97-5768-45fd-96dd-7e290d14e7da: the server could not find the requested resource (get pods dns-test-9ec13c97-5768-45fd-96dd-7e290d14e7da)
Oct 12 18:26:36.239: INFO: Lookups using dns-2432/dns-test-9ec13c97-5768-45fd-96dd-7e290d14e7da failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2432 wheezy_tcp@dns-test-service.dns-2432 wheezy_udp@dns-test-service.dns-2432.svc wheezy_tcp@dns-test-service.dns-2432.svc wheezy_udp@_http._tcp.dns-test-service.dns-2432.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2432.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2432 jessie_tcp@dns-test-service.dns-2432 jessie_udp@dns-test-service.dns-2432.svc jessie_tcp@dns-test-service.dns-2432.svc jessie_udp@_http._tcp.dns-test-service.dns-2432.svc jessie_tcp@_http._tcp.dns-test-service.dns-2432.svc]

Oct 12 18:26:39.838: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2432/dns-test-9ec13c97-5768-45fd-96dd-7e290d14e7da: the server could not find the requested resource (get pods dns-test-9ec13c97-5768-45fd-96dd-7e290d14e7da)
Oct 12 18:26:39.888: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2432/dns-test-9ec13c97-5768-45fd-96dd-7e290d14e7da: the server could not find the requested resource (get pods dns-test-9ec13c97-5768-45fd-96dd-7e290d14e7da)
Oct 12 18:26:39.939: INFO: Unable to read wheezy_udp@dns-test-service.dns-2432 from pod dns-2432/dns-test-9ec13c97-5768-45fd-96dd-7e290d14e7da: the server could not find the requested resource (get pods dns-test-9ec13c97-5768-45fd-96dd-7e290d14e7da)
Oct 12 18:26:39.989: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2432 from pod dns-2432/dns-test-9ec13c97-5768-45fd-96dd-7e290d14e7da: the server could not find the requested resource (get pods dns-test-9ec13c97-5768-45fd-96dd-7e290d14e7da)
Oct 12 18:26:40.045: INFO: Unable to read wheezy_udp@dns-test-service.dns-2432.svc from pod dns-2432/dns-test-9ec13c97-5768-45fd-96dd-7e290d14e7da: the server could not find the requested resource (get pods dns-test-9ec13c97-5768-45fd-96dd-7e290d14e7da)
... skipping 5 lines ...
Oct 12 18:26:40.659: INFO: Unable to read jessie_udp@dns-test-service.dns-2432 from pod dns-2432/dns-test-9ec13c97-5768-45fd-96dd-7e290d14e7da: the server could not find the requested resource (get pods dns-test-9ec13c97-5768-45fd-96dd-7e290d14e7da)
Oct 12 18:26:40.709: INFO: Unable to read jessie_tcp@dns-test-service.dns-2432 from pod dns-2432/dns-test-9ec13c97-5768-45fd-96dd-7e290d14e7da: the server could not find the requested resource (get pods dns-test-9ec13c97-5768-45fd-96dd-7e290d14e7da)
Oct 12 18:26:40.759: INFO: Unable to read jessie_udp@dns-test-service.dns-2432.svc from pod dns-2432/dns-test-9ec13c97-5768-45fd-96dd-7e290d14e7da: the server could not find the requested resource (get pods dns-test-9ec13c97-5768-45fd-96dd-7e290d14e7da)
Oct 12 18:26:40.810: INFO: Unable to read jessie_tcp@dns-test-service.dns-2432.svc from pod dns-2432/dns-test-9ec13c97-5768-45fd-96dd-7e290d14e7da: the server could not find the requested resource (get pods dns-test-9ec13c97-5768-45fd-96dd-7e290d14e7da)
Oct 12 18:26:40.880: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2432.svc from pod dns-2432/dns-test-9ec13c97-5768-45fd-96dd-7e290d14e7da: the server could not find the requested resource (get pods dns-test-9ec13c97-5768-45fd-96dd-7e290d14e7da)
Oct 12 18:26:40.935: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2432.svc from pod dns-2432/dns-test-9ec13c97-5768-45fd-96dd-7e290d14e7da: the server could not find the requested resource (get pods dns-test-9ec13c97-5768-45fd-96dd-7e290d14e7da)
Oct 12 18:26:41.241: INFO: Lookups using dns-2432/dns-test-9ec13c97-5768-45fd-96dd-7e290d14e7da failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2432 wheezy_tcp@dns-test-service.dns-2432 wheezy_udp@dns-test-service.dns-2432.svc wheezy_tcp@dns-test-service.dns-2432.svc wheezy_udp@_http._tcp.dns-test-service.dns-2432.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2432.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2432 jessie_tcp@dns-test-service.dns-2432 jessie_udp@dns-test-service.dns-2432.svc jessie_tcp@dns-test-service.dns-2432.svc jessie_udp@_http._tcp.dns-test-service.dns-2432.svc jessie_tcp@_http._tcp.dns-test-service.dns-2432.svc]

Oct 12 18:26:44.837: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2432/dns-test-9ec13c97-5768-45fd-96dd-7e290d14e7da: the server could not find the requested resource (get pods dns-test-9ec13c97-5768-45fd-96dd-7e290d14e7da)
Oct 12 18:26:44.887: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2432/dns-test-9ec13c97-5768-45fd-96dd-7e290d14e7da: the server could not find the requested resource (get pods dns-test-9ec13c97-5768-45fd-96dd-7e290d14e7da)
Oct 12 18:26:44.937: INFO: Unable to read wheezy_udp@dns-test-service.dns-2432 from pod dns-2432/dns-test-9ec13c97-5768-45fd-96dd-7e290d14e7da: the server could not find the requested resource (get pods dns-test-9ec13c97-5768-45fd-96dd-7e290d14e7da)
Oct 12 18:26:44.988: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2432 from pod dns-2432/dns-test-9ec13c97-5768-45fd-96dd-7e290d14e7da: the server could not find the requested resource (get pods dns-test-9ec13c97-5768-45fd-96dd-7e290d14e7da)
Oct 12 18:26:45.038: INFO: Unable to read wheezy_udp@dns-test-service.dns-2432.svc from pod dns-2432/dns-test-9ec13c97-5768-45fd-96dd-7e290d14e7da: the server could not find the requested resource (get pods dns-test-9ec13c97-5768-45fd-96dd-7e290d14e7da)
... skipping 5 lines ...
Oct 12 18:26:45.646: INFO: Unable to read jessie_udp@dns-test-service.dns-2432 from pod dns-2432/dns-test-9ec13c97-5768-45fd-96dd-7e290d14e7da: the server could not find the requested resource (get pods dns-test-9ec13c97-5768-45fd-96dd-7e290d14e7da)
Oct 12 18:26:45.696: INFO: Unable to read jessie_tcp@dns-test-service.dns-2432 from pod dns-2432/dns-test-9ec13c97-5768-45fd-96dd-7e290d14e7da: the server could not find the requested resource (get pods dns-test-9ec13c97-5768-45fd-96dd-7e290d14e7da)
Oct 12 18:26:45.746: INFO: Unable to read jessie_udp@dns-test-service.dns-2432.svc from pod dns-2432/dns-test-9ec13c97-5768-45fd-96dd-7e290d14e7da: the server could not find the requested resource (get pods dns-test-9ec13c97-5768-45fd-96dd-7e290d14e7da)
Oct 12 18:26:45.797: INFO: Unable to read jessie_tcp@dns-test-service.dns-2432.svc from pod dns-2432/dns-test-9ec13c97-5768-45fd-96dd-7e290d14e7da: the server could not find the requested resource (get pods dns-test-9ec13c97-5768-45fd-96dd-7e290d14e7da)
Oct 12 18:26:45.847: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2432.svc from pod dns-2432/dns-test-9ec13c97-5768-45fd-96dd-7e290d14e7da: the server could not find the requested resource (get pods dns-test-9ec13c97-5768-45fd-96dd-7e290d14e7da)
Oct 12 18:26:45.897: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2432.svc from pod dns-2432/dns-test-9ec13c97-5768-45fd-96dd-7e290d14e7da: the server could not find the requested resource (get pods dns-test-9ec13c97-5768-45fd-96dd-7e290d14e7da)
Oct 12 18:26:46.203: INFO: Lookups using dns-2432/dns-test-9ec13c97-5768-45fd-96dd-7e290d14e7da failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2432 wheezy_tcp@dns-test-service.dns-2432 wheezy_udp@dns-test-service.dns-2432.svc wheezy_tcp@dns-test-service.dns-2432.svc wheezy_udp@_http._tcp.dns-test-service.dns-2432.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2432.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2432 jessie_tcp@dns-test-service.dns-2432 jessie_udp@dns-test-service.dns-2432.svc jessie_tcp@dns-test-service.dns-2432.svc jessie_udp@_http._tcp.dns-test-service.dns-2432.svc jessie_tcp@_http._tcp.dns-test-service.dns-2432.svc]

Oct 12 18:26:50.173: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2432.svc from pod dns-2432/dns-test-9ec13c97-5768-45fd-96dd-7e290d14e7da: the server could not find the requested resource (get pods dns-test-9ec13c97-5768-45fd-96dd-7e290d14e7da)
Oct 12 18:26:50.223: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2432.svc from pod dns-2432/dns-test-9ec13c97-5768-45fd-96dd-7e290d14e7da: the server could not find the requested resource (get pods dns-test-9ec13c97-5768-45fd-96dd-7e290d14e7da)
Oct 12 18:26:51.257: INFO: Lookups using dns-2432/dns-test-9ec13c97-5768-45fd-96dd-7e290d14e7da failed for: [wheezy_udp@_http._tcp.dns-test-service.dns-2432.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2432.svc]

Oct 12 18:26:56.230: INFO: DNS probes using dns-2432/dns-test-9ec13c97-5768-45fd-96dd-7e290d14e7da succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
... skipping 6 lines ...
• [SLOW TEST:38.743 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":-1,"completed":6,"skipped":66,"failed":0}

SS
------------------------------
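The DNS conformance case above repeatedly resolves service names from probe pods until the lookups that were failing ("Lookups using dns-2432/... failed for: [...]") eventually succeed ("DNS probes ... succeeded"). A standard-library sketch of that retry-until-resolvable idea, only meaningful when run inside the cluster where cluster DNS serves *.svc names; the FQDN and timings are placeholders, not values from this run:

package main

import (
	"context"
	"fmt"
	"log"
	"net"
	"time"
)

func main() {
	// Placeholder name; the probes in the log above resolve names like
	// dns-test-service.dns-2432.svc instead.
	fqdn := "kubernetes.default.svc.cluster.local"

	var addrs []string
	var err error
	// Retry roughly every 5s for up to 2 minutes, stopping early once a lookup succeeds.
	deadline := time.Now().Add(2 * time.Minute)
	for {
		ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
		addrs, err = net.DefaultResolver.LookupHost(ctx, fqdn)
		cancel()
		if err == nil || time.Now().After(deadline) {
			break
		}
		time.Sleep(5 * time.Second)
	}
	if err != nil {
		log.Fatalf("lookup of %s never succeeded: %v", fqdn, err)
	}
	fmt.Println(fqdn, "->", addrs)
}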
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:26:56.632: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 63 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 18:26:57.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7466" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for cronjob","total":-1,"completed":7,"skipped":72,"failed":0}

SS
------------------------------
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 16 lines ...
Oct 12 18:26:31.987: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4904.svc.cluster.local from pod dns-4904/dns-test-5be15df2-b5ae-4038-a498-dfe9e155a166: the server could not find the requested resource (get pods dns-test-5be15df2-b5ae-4038-a498-dfe9e155a166)
Oct 12 18:26:32.039: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4904.svc.cluster.local from pod dns-4904/dns-test-5be15df2-b5ae-4038-a498-dfe9e155a166: the server could not find the requested resource (get pods dns-test-5be15df2-b5ae-4038-a498-dfe9e155a166)
Oct 12 18:26:32.196: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4904.svc.cluster.local from pod dns-4904/dns-test-5be15df2-b5ae-4038-a498-dfe9e155a166: the server could not find the requested resource (get pods dns-test-5be15df2-b5ae-4038-a498-dfe9e155a166)
Oct 12 18:26:32.250: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4904.svc.cluster.local from pod dns-4904/dns-test-5be15df2-b5ae-4038-a498-dfe9e155a166: the server could not find the requested resource (get pods dns-test-5be15df2-b5ae-4038-a498-dfe9e155a166)
Oct 12 18:26:32.304: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4904.svc.cluster.local from pod dns-4904/dns-test-5be15df2-b5ae-4038-a498-dfe9e155a166: the server could not find the requested resource (get pods dns-test-5be15df2-b5ae-4038-a498-dfe9e155a166)
Oct 12 18:26:32.356: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4904.svc.cluster.local from pod dns-4904/dns-test-5be15df2-b5ae-4038-a498-dfe9e155a166: the server could not find the requested resource (get pods dns-test-5be15df2-b5ae-4038-a498-dfe9e155a166)
Oct 12 18:26:32.460: INFO: Lookups using dns-4904/dns-test-5be15df2-b5ae-4038-a498-dfe9e155a166 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4904.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4904.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4904.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4904.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4904.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4904.svc.cluster.local jessie_udp@dns-test-service-2.dns-4904.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4904.svc.cluster.local]

Oct 12 18:26:37.518: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4904.svc.cluster.local from pod dns-4904/dns-test-5be15df2-b5ae-4038-a498-dfe9e155a166: the server could not find the requested resource (get pods dns-test-5be15df2-b5ae-4038-a498-dfe9e155a166)
Oct 12 18:26:37.579: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4904.svc.cluster.local from pod dns-4904/dns-test-5be15df2-b5ae-4038-a498-dfe9e155a166: the server could not find the requested resource (get pods dns-test-5be15df2-b5ae-4038-a498-dfe9e155a166)
Oct 12 18:26:37.635: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4904.svc.cluster.local from pod dns-4904/dns-test-5be15df2-b5ae-4038-a498-dfe9e155a166: the server could not find the requested resource (get pods dns-test-5be15df2-b5ae-4038-a498-dfe9e155a166)
Oct 12 18:26:37.691: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4904.svc.cluster.local from pod dns-4904/dns-test-5be15df2-b5ae-4038-a498-dfe9e155a166: the server could not find the requested resource (get pods dns-test-5be15df2-b5ae-4038-a498-dfe9e155a166)
Oct 12 18:26:37.847: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4904.svc.cluster.local from pod dns-4904/dns-test-5be15df2-b5ae-4038-a498-dfe9e155a166: the server could not find the requested resource (get pods dns-test-5be15df2-b5ae-4038-a498-dfe9e155a166)
Oct 12 18:26:37.899: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4904.svc.cluster.local from pod dns-4904/dns-test-5be15df2-b5ae-4038-a498-dfe9e155a166: the server could not find the requested resource (get pods dns-test-5be15df2-b5ae-4038-a498-dfe9e155a166)
Oct 12 18:26:37.951: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4904.svc.cluster.local from pod dns-4904/dns-test-5be15df2-b5ae-4038-a498-dfe9e155a166: the server could not find the requested resource (get pods dns-test-5be15df2-b5ae-4038-a498-dfe9e155a166)
Oct 12 18:26:38.008: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4904.svc.cluster.local from pod dns-4904/dns-test-5be15df2-b5ae-4038-a498-dfe9e155a166: the server could not find the requested resource (get pods dns-test-5be15df2-b5ae-4038-a498-dfe9e155a166)
Oct 12 18:26:38.117: INFO: Lookups using dns-4904/dns-test-5be15df2-b5ae-4038-a498-dfe9e155a166 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4904.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4904.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4904.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4904.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4904.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4904.svc.cluster.local jessie_udp@dns-test-service-2.dns-4904.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4904.svc.cluster.local]

Oct 12 18:26:42.520: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4904.svc.cluster.local from pod dns-4904/dns-test-5be15df2-b5ae-4038-a498-dfe9e155a166: the server could not find the requested resource (get pods dns-test-5be15df2-b5ae-4038-a498-dfe9e155a166)
Oct 12 18:26:42.573: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4904.svc.cluster.local from pod dns-4904/dns-test-5be15df2-b5ae-4038-a498-dfe9e155a166: the server could not find the requested resource (get pods dns-test-5be15df2-b5ae-4038-a498-dfe9e155a166)
Oct 12 18:26:42.628: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4904.svc.cluster.local from pod dns-4904/dns-test-5be15df2-b5ae-4038-a498-dfe9e155a166: the server could not find the requested resource (get pods dns-test-5be15df2-b5ae-4038-a498-dfe9e155a166)
Oct 12 18:26:42.680: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4904.svc.cluster.local from pod dns-4904/dns-test-5be15df2-b5ae-4038-a498-dfe9e155a166: the server could not find the requested resource (get pods dns-test-5be15df2-b5ae-4038-a498-dfe9e155a166)
Oct 12 18:26:42.837: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4904.svc.cluster.local from pod dns-4904/dns-test-5be15df2-b5ae-4038-a498-dfe9e155a166: the server could not find the requested resource (get pods dns-test-5be15df2-b5ae-4038-a498-dfe9e155a166)
Oct 12 18:26:42.889: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4904.svc.cluster.local from pod dns-4904/dns-test-5be15df2-b5ae-4038-a498-dfe9e155a166: the server could not find the requested resource (get pods dns-test-5be15df2-b5ae-4038-a498-dfe9e155a166)
Oct 12 18:26:42.940: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4904.svc.cluster.local from pod dns-4904/dns-test-5be15df2-b5ae-4038-a498-dfe9e155a166: the server could not find the requested resource (get pods dns-test-5be15df2-b5ae-4038-a498-dfe9e155a166)
Oct 12 18:26:42.992: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4904.svc.cluster.local from pod dns-4904/dns-test-5be15df2-b5ae-4038-a498-dfe9e155a166: the server could not find the requested resource (get pods dns-test-5be15df2-b5ae-4038-a498-dfe9e155a166)
Oct 12 18:26:43.096: INFO: Lookups using dns-4904/dns-test-5be15df2-b5ae-4038-a498-dfe9e155a166 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4904.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4904.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4904.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4904.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4904.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4904.svc.cluster.local jessie_udp@dns-test-service-2.dns-4904.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4904.svc.cluster.local]

Oct 12 18:26:47.512: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4904.svc.cluster.local from pod dns-4904/dns-test-5be15df2-b5ae-4038-a498-dfe9e155a166: the server could not find the requested resource (get pods dns-test-5be15df2-b5ae-4038-a498-dfe9e155a166)
Oct 12 18:26:47.564: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4904.svc.cluster.local from pod dns-4904/dns-test-5be15df2-b5ae-4038-a498-dfe9e155a166: the server could not find the requested resource (get pods dns-test-5be15df2-b5ae-4038-a498-dfe9e155a166)
Oct 12 18:26:47.616: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4904.svc.cluster.local from pod dns-4904/dns-test-5be15df2-b5ae-4038-a498-dfe9e155a166: the server could not find the requested resource (get pods dns-test-5be15df2-b5ae-4038-a498-dfe9e155a166)
Oct 12 18:26:47.675: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4904.svc.cluster.local from pod dns-4904/dns-test-5be15df2-b5ae-4038-a498-dfe9e155a166: the server could not find the requested resource (get pods dns-test-5be15df2-b5ae-4038-a498-dfe9e155a166)
Oct 12 18:26:47.833: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4904.svc.cluster.local from pod dns-4904/dns-test-5be15df2-b5ae-4038-a498-dfe9e155a166: the server could not find the requested resource (get pods dns-test-5be15df2-b5ae-4038-a498-dfe9e155a166)
Oct 12 18:26:47.885: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4904.svc.cluster.local from pod dns-4904/dns-test-5be15df2-b5ae-4038-a498-dfe9e155a166: the server could not find the requested resource (get pods dns-test-5be15df2-b5ae-4038-a498-dfe9e155a166)
Oct 12 18:26:47.938: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4904.svc.cluster.local from pod dns-4904/dns-test-5be15df2-b5ae-4038-a498-dfe9e155a166: the server could not find the requested resource (get pods dns-test-5be15df2-b5ae-4038-a498-dfe9e155a166)
Oct 12 18:26:47.990: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4904.svc.cluster.local from pod dns-4904/dns-test-5be15df2-b5ae-4038-a498-dfe9e155a166: the server could not find the requested resource (get pods dns-test-5be15df2-b5ae-4038-a498-dfe9e155a166)
Oct 12 18:26:48.108: INFO: Lookups using dns-4904/dns-test-5be15df2-b5ae-4038-a498-dfe9e155a166 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4904.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4904.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4904.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4904.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4904.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4904.svc.cluster.local jessie_udp@dns-test-service-2.dns-4904.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4904.svc.cluster.local]

Oct 12 18:26:52.514: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4904.svc.cluster.local from pod dns-4904/dns-test-5be15df2-b5ae-4038-a498-dfe9e155a166: the server could not find the requested resource (get pods dns-test-5be15df2-b5ae-4038-a498-dfe9e155a166)
Oct 12 18:26:52.566: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4904.svc.cluster.local from pod dns-4904/dns-test-5be15df2-b5ae-4038-a498-dfe9e155a166: the server could not find the requested resource (get pods dns-test-5be15df2-b5ae-4038-a498-dfe9e155a166)
Oct 12 18:26:52.618: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4904.svc.cluster.local from pod dns-4904/dns-test-5be15df2-b5ae-4038-a498-dfe9e155a166: the server could not find the requested resource (get pods dns-test-5be15df2-b5ae-4038-a498-dfe9e155a166)
Oct 12 18:26:52.670: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4904.svc.cluster.local from pod dns-4904/dns-test-5be15df2-b5ae-4038-a498-dfe9e155a166: the server could not find the requested resource (get pods dns-test-5be15df2-b5ae-4038-a498-dfe9e155a166)
Oct 12 18:26:52.827: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4904.svc.cluster.local from pod dns-4904/dns-test-5be15df2-b5ae-4038-a498-dfe9e155a166: the server could not find the requested resource (get pods dns-test-5be15df2-b5ae-4038-a498-dfe9e155a166)
Oct 12 18:26:52.879: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4904.svc.cluster.local from pod dns-4904/dns-test-5be15df2-b5ae-4038-a498-dfe9e155a166: the server could not find the requested resource (get pods dns-test-5be15df2-b5ae-4038-a498-dfe9e155a166)
Oct 12 18:26:52.932: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4904.svc.cluster.local from pod dns-4904/dns-test-5be15df2-b5ae-4038-a498-dfe9e155a166: the server could not find the requested resource (get pods dns-test-5be15df2-b5ae-4038-a498-dfe9e155a166)
Oct 12 18:26:52.987: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4904.svc.cluster.local from pod dns-4904/dns-test-5be15df2-b5ae-4038-a498-dfe9e155a166: the server could not find the requested resource (get pods dns-test-5be15df2-b5ae-4038-a498-dfe9e155a166)
Oct 12 18:26:53.103: INFO: Lookups using dns-4904/dns-test-5be15df2-b5ae-4038-a498-dfe9e155a166 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4904.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4904.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4904.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4904.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4904.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4904.svc.cluster.local jessie_udp@dns-test-service-2.dns-4904.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4904.svc.cluster.local]

Oct 12 18:26:58.101: INFO: DNS probes using dns-4904/dns-test-5be15df2-b5ae-4038-a498-dfe9e155a166 succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
... skipping 5 lines ...
• [SLOW TEST:37.038 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide DNS for pods for Subdomain [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":-1,"completed":5,"skipped":53,"failed":0}

SS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":4,"skipped":26,"failed":0}
[BeforeEach] [sig-storage] PVC Protection
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 18:26:18.160: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pvc-protection
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 27 lines ...
• [SLOW TEST:41.104 seconds]
[sig-storage] PVC Protection
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Verify "immediate" deletion of a PVC that is not in active use by a pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:114
------------------------------
{"msg":"PASSED [sig-storage] PVC Protection Verify \"immediate\" deletion of a PVC that is not in active use by a pod","total":-1,"completed":5,"skipped":26,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-node] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":51,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 18:26:48.134: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support existing directory
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
Oct 12 18:26:48.396: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Oct 12 18:26:48.505: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-7114" in namespace "provisioning-7114" to be "Succeeded or Failed"
Oct 12 18:26:48.555: INFO: Pod "hostpath-symlink-prep-provisioning-7114": Phase="Pending", Reason="", readiness=false. Elapsed: 50.457903ms
Oct 12 18:26:50.607: INFO: Pod "hostpath-symlink-prep-provisioning-7114": Phase="Pending", Reason="", readiness=false. Elapsed: 2.102762351s
Oct 12 18:26:52.659: INFO: Pod "hostpath-symlink-prep-provisioning-7114": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.153982251s
STEP: Saw pod success
Oct 12 18:26:52.659: INFO: Pod "hostpath-symlink-prep-provisioning-7114" satisfied condition "Succeeded or Failed"
Oct 12 18:26:52.659: INFO: Deleting pod "hostpath-symlink-prep-provisioning-7114" in namespace "provisioning-7114"
Oct 12 18:26:52.714: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-7114" to be fully deleted
Oct 12 18:26:52.765: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-bftc
STEP: Creating a pod to test subpath
Oct 12 18:26:52.820: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-bftc" in namespace "provisioning-7114" to be "Succeeded or Failed"
Oct 12 18:26:52.871: INFO: Pod "pod-subpath-test-inlinevolume-bftc": Phase="Pending", Reason="", readiness=false. Elapsed: 50.655831ms
Oct 12 18:26:54.922: INFO: Pod "pod-subpath-test-inlinevolume-bftc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.102119994s
Oct 12 18:26:56.977: INFO: Pod "pod-subpath-test-inlinevolume-bftc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.15654099s
STEP: Saw pod success
Oct 12 18:26:56.977: INFO: Pod "pod-subpath-test-inlinevolume-bftc" satisfied condition "Succeeded or Failed"
Oct 12 18:26:57.039: INFO: Trying to get logs from node ip-172-20-47-26.us-west-1.compute.internal pod pod-subpath-test-inlinevolume-bftc container test-container-volume-inlinevolume-bftc: <nil>
STEP: delete the pod
Oct 12 18:26:57.189: INFO: Waiting for pod pod-subpath-test-inlinevolume-bftc to disappear
Oct 12 18:26:57.241: INFO: Pod pod-subpath-test-inlinevolume-bftc no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-bftc
Oct 12 18:26:57.241: INFO: Deleting pod "pod-subpath-test-inlinevolume-bftc" in namespace "provisioning-7114"
STEP: Deleting pod
Oct 12 18:26:57.292: INFO: Deleting pod "pod-subpath-test-inlinevolume-bftc" in namespace "provisioning-7114"
Oct 12 18:26:57.396: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-7114" in namespace "provisioning-7114" to be "Succeeded or Failed"
Oct 12 18:26:57.446: INFO: Pod "hostpath-symlink-prep-provisioning-7114": Phase="Pending", Reason="", readiness=false. Elapsed: 50.620686ms
Oct 12 18:26:59.497: INFO: Pod "hostpath-symlink-prep-provisioning-7114": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.101570959s
STEP: Saw pod success
Oct 12 18:26:59.497: INFO: Pod "hostpath-symlink-prep-provisioning-7114" satisfied condition "Succeeded or Failed"
Oct 12 18:26:59.497: INFO: Deleting pod "hostpath-symlink-prep-provisioning-7114" in namespace "provisioning-7114"
Oct 12 18:26:59.559: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-7114" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 18:26:59.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-7114" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support existing directory","total":-1,"completed":9,"skipped":51,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:26:59.729: INFO: Driver local doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 43 lines ...
• [SLOW TEST:161.231 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":3,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:26:59.840: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 119 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI attach test using mock driver
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:317
    should require VolumeAttach for drivers with attachment
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:339
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI attach test using mock driver should require VolumeAttach for drivers with attachment","total":-1,"completed":7,"skipped":59,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:27:02.150: INFO: Driver "csi-hostpath" does not support FsGroup - skipping
... skipping 25 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Oct 12 18:27:00.200: INFO: Waiting up to 5m0s for pod "downwardapi-volume-39ef2e3a-55ee-42bd-b710-a4e4db32eefa" in namespace "downward-api-1861" to be "Succeeded or Failed"
Oct 12 18:27:00.253: INFO: Pod "downwardapi-volume-39ef2e3a-55ee-42bd-b710-a4e4db32eefa": Phase="Pending", Reason="", readiness=false. Elapsed: 52.213747ms
Oct 12 18:27:02.304: INFO: Pod "downwardapi-volume-39ef2e3a-55ee-42bd-b710-a4e4db32eefa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.10393045s
STEP: Saw pod success
Oct 12 18:27:02.305: INFO: Pod "downwardapi-volume-39ef2e3a-55ee-42bd-b710-a4e4db32eefa" satisfied condition "Succeeded or Failed"
Oct 12 18:27:02.355: INFO: Trying to get logs from node ip-172-20-37-53.us-west-1.compute.internal pod downwardapi-volume-39ef2e3a-55ee-42bd-b710-a4e4db32eefa container client-container: <nil>
STEP: delete the pod
Oct 12 18:27:02.467: INFO: Waiting for pod downwardapi-volume-39ef2e3a-55ee-42bd-b710-a4e4db32eefa to disappear
Oct 12 18:27:02.518: INFO: Pod downwardapi-volume-39ef2e3a-55ee-42bd-b710-a4e4db32eefa no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 18:27:02.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1861" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":52,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Oct 12 18:27:02.486: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0d325270-31eb-4b01-aba9-10a3be4605c8" in namespace "downward-api-3075" to be "Succeeded or Failed"
Oct 12 18:27:02.536: INFO: Pod "downwardapi-volume-0d325270-31eb-4b01-aba9-10a3be4605c8": Phase="Pending", Reason="", readiness=false. Elapsed: 49.523709ms
Oct 12 18:27:04.587: INFO: Pod "downwardapi-volume-0d325270-31eb-4b01-aba9-10a3be4605c8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.100944217s
STEP: Saw pod success
Oct 12 18:27:04.587: INFO: Pod "downwardapi-volume-0d325270-31eb-4b01-aba9-10a3be4605c8" satisfied condition "Succeeded or Failed"
Oct 12 18:27:04.637: INFO: Trying to get logs from node ip-172-20-37-53.us-west-1.compute.internal pod downwardapi-volume-0d325270-31eb-4b01-aba9-10a3be4605c8 container client-container: <nil>
STEP: delete the pod
Oct 12 18:27:04.791: INFO: Waiting for pod downwardapi-volume-0d325270-31eb-4b01-aba9-10a3be4605c8 to disappear
Oct 12 18:27:04.855: INFO: Pod downwardapi-volume-0d325270-31eb-4b01-aba9-10a3be4605c8 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 18:27:04.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3075" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":66,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:27:05.017: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 21 lines ...
Oct 12 18:27:02.642: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0666 on node default medium
Oct 12 18:27:02.961: INFO: Waiting up to 5m0s for pod "pod-1f416531-cae2-440a-bd34-68a7d49f054b" in namespace "emptydir-3233" to be "Succeeded or Failed"
Oct 12 18:27:03.012: INFO: Pod "pod-1f416531-cae2-440a-bd34-68a7d49f054b": Phase="Pending", Reason="", readiness=false. Elapsed: 51.180824ms
Oct 12 18:27:05.086: INFO: Pod "pod-1f416531-cae2-440a-bd34-68a7d49f054b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.125825712s
STEP: Saw pod success
Oct 12 18:27:05.087: INFO: Pod "pod-1f416531-cae2-440a-bd34-68a7d49f054b" satisfied condition "Succeeded or Failed"
Oct 12 18:27:05.178: INFO: Trying to get logs from node ip-172-20-47-26.us-west-1.compute.internal pod pod-1f416531-cae2-440a-bd34-68a7d49f054b container test-container: <nil>
STEP: delete the pod
Oct 12 18:27:05.357: INFO: Waiting for pod pod-1f416531-cae2-440a-bd34-68a7d49f054b to disappear
Oct 12 18:27:05.422: INFO: Pod pod-1f416531-cae2-440a-bd34-68a7d49f054b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 22 lines ...
• [SLOW TEST:8.380 seconds]
[sig-api-machinery] Watchers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":-1,"completed":8,"skipped":74,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
... skipping 181 lines ...
• [SLOW TEST:20.020 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of different groups [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":-1,"completed":6,"skipped":36,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
... skipping 60 lines ...
Oct 12 18:26:20.527: INFO: PersistentVolumeClaim csi-hostpathbz8rl found but phase is Pending instead of Bound.
Oct 12 18:26:22.577: INFO: PersistentVolumeClaim csi-hostpathbz8rl found but phase is Pending instead of Bound.
Oct 12 18:26:24.627: INFO: PersistentVolumeClaim csi-hostpathbz8rl found but phase is Pending instead of Bound.
Oct 12 18:26:26.679: INFO: PersistentVolumeClaim csi-hostpathbz8rl found and phase=Bound (6.204198189s)
STEP: Creating pod pod-subpath-test-dynamicpv-jx7j
STEP: Creating a pod to test subpath
Oct 12 18:26:26.836: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-jx7j" in namespace "provisioning-6536" to be "Succeeded or Failed"
Oct 12 18:26:26.888: INFO: Pod "pod-subpath-test-dynamicpv-jx7j": Phase="Pending", Reason="", readiness=false. Elapsed: 52.117518ms
Oct 12 18:26:28.939: INFO: Pod "pod-subpath-test-dynamicpv-jx7j": Phase="Pending", Reason="", readiness=false. Elapsed: 2.103233486s
Oct 12 18:26:30.993: INFO: Pod "pod-subpath-test-dynamicpv-jx7j": Phase="Pending", Reason="", readiness=false. Elapsed: 4.157455409s
Oct 12 18:26:33.045: INFO: Pod "pod-subpath-test-dynamicpv-jx7j": Phase="Pending", Reason="", readiness=false. Elapsed: 6.208887791s
Oct 12 18:26:35.095: INFO: Pod "pod-subpath-test-dynamicpv-jx7j": Phase="Pending", Reason="", readiness=false. Elapsed: 8.259689226s
Oct 12 18:26:37.148: INFO: Pod "pod-subpath-test-dynamicpv-jx7j": Phase="Pending", Reason="", readiness=false. Elapsed: 10.311861283s
... skipping 3 lines ...
Oct 12 18:26:45.349: INFO: Pod "pod-subpath-test-dynamicpv-jx7j": Phase="Pending", Reason="", readiness=false. Elapsed: 18.513261469s
Oct 12 18:26:47.399: INFO: Pod "pod-subpath-test-dynamicpv-jx7j": Phase="Pending", Reason="", readiness=false. Elapsed: 20.563746028s
Oct 12 18:26:49.501: INFO: Pod "pod-subpath-test-dynamicpv-jx7j": Phase="Pending", Reason="", readiness=false. Elapsed: 22.665003684s
Oct 12 18:26:51.551: INFO: Pod "pod-subpath-test-dynamicpv-jx7j": Phase="Pending", Reason="", readiness=false. Elapsed: 24.715291534s
Oct 12 18:26:53.604: INFO: Pod "pod-subpath-test-dynamicpv-jx7j": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.767874264s
STEP: Saw pod success
Oct 12 18:26:53.604: INFO: Pod "pod-subpath-test-dynamicpv-jx7j" satisfied condition "Succeeded or Failed"
Oct 12 18:26:53.655: INFO: Trying to get logs from node ip-172-20-59-223.us-west-1.compute.internal pod pod-subpath-test-dynamicpv-jx7j container test-container-subpath-dynamicpv-jx7j: <nil>
STEP: delete the pod
Oct 12 18:26:53.766: INFO: Waiting for pod pod-subpath-test-dynamicpv-jx7j to disappear
Oct 12 18:26:53.816: INFO: Pod pod-subpath-test-dynamicpv-jx7j no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-jx7j
Oct 12 18:26:53.816: INFO: Deleting pod "pod-subpath-test-dynamicpv-jx7j" in namespace "provisioning-6536"
... skipping 60 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:365
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":7,"skipped":65,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:27:08.671: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 120 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":7,"skipped":43,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:27:09.670: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 102 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should resize volume when PVC is edited while pod is using it
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:246
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it","total":-1,"completed":5,"skipped":38,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 18:27:05.026: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward api env vars
Oct 12 18:27:05.459: INFO: Waiting up to 5m0s for pod "downward-api-f378d48b-75d0-4a01-acb2-d970f3121334" in namespace "downward-api-8060" to be "Succeeded or Failed"
Oct 12 18:27:05.517: INFO: Pod "downward-api-f378d48b-75d0-4a01-acb2-d970f3121334": Phase="Pending", Reason="", readiness=false. Elapsed: 57.94985ms
Oct 12 18:27:07.571: INFO: Pod "downward-api-f378d48b-75d0-4a01-acb2-d970f3121334": Phase="Pending", Reason="", readiness=false. Elapsed: 2.112117143s
Oct 12 18:27:09.622: INFO: Pod "downward-api-f378d48b-75d0-4a01-acb2-d970f3121334": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.162692884s
STEP: Saw pod success
Oct 12 18:27:09.622: INFO: Pod "downward-api-f378d48b-75d0-4a01-acb2-d970f3121334" satisfied condition "Succeeded or Failed"
Oct 12 18:27:09.671: INFO: Trying to get logs from node ip-172-20-47-26.us-west-1.compute.internal pod downward-api-f378d48b-75d0-4a01-acb2-d970f3121334 container dapi-container: <nil>
STEP: delete the pod
Oct 12 18:27:09.798: INFO: Waiting for pod downward-api-f378d48b-75d0-4a01-acb2-d970f3121334 to disappear
Oct 12 18:27:09.851: INFO: Pod downward-api-f378d48b-75d0-4a01-acb2-d970f3121334 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 18:27:09.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8060" for this suite.

•SS
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":72,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [sig-node] KubeletManagedEtcHosts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 55 lines ...
• [SLOW TEST:14.791 seconds]
[sig-node] KubeletManagedEtcHosts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":18,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:27:10.037: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 35 lines ...
• [SLOW TEST:5.770 seconds]
[sig-node] InitContainer [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":-1,"completed":8,"skipped":72,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 27 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 18:27:14.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-7982" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":6,"skipped":44,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:27:14.927: INFO: Only supported for providers [gce gke] (not aws)
... skipping 45 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:59
STEP: Creating configMap with name projected-configmap-test-volume-9cad9e19-8c70-4f79-8600-dc605802e6cf
STEP: Creating a pod to test consume configMaps
Oct 12 18:27:10.405: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3ba7241f-ab9f-4d26-968a-fcdea3b0e4fd" in namespace "projected-3199" to be "Succeeded or Failed"
Oct 12 18:27:10.454: INFO: Pod "pod-projected-configmaps-3ba7241f-ab9f-4d26-968a-fcdea3b0e4fd": Phase="Pending", Reason="", readiness=false. Elapsed: 49.140405ms
Oct 12 18:27:12.505: INFO: Pod "pod-projected-configmaps-3ba7241f-ab9f-4d26-968a-fcdea3b0e4fd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.099760483s
Oct 12 18:27:14.555: INFO: Pod "pod-projected-configmaps-3ba7241f-ab9f-4d26-968a-fcdea3b0e4fd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.150204786s
Oct 12 18:27:16.605: INFO: Pod "pod-projected-configmaps-3ba7241f-ab9f-4d26-968a-fcdea3b0e4fd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.200261697s
STEP: Saw pod success
Oct 12 18:27:16.605: INFO: Pod "pod-projected-configmaps-3ba7241f-ab9f-4d26-968a-fcdea3b0e4fd" satisfied condition "Succeeded or Failed"
Oct 12 18:27:16.655: INFO: Trying to get logs from node ip-172-20-56-153.us-west-1.compute.internal pod pod-projected-configmaps-3ba7241f-ab9f-4d26-968a-fcdea3b0e4fd container agnhost-container: <nil>
STEP: delete the pod
Oct 12 18:27:16.765: INFO: Waiting for pod pod-projected-configmaps-3ba7241f-ab9f-4d26-968a-fcdea3b0e4fd to disappear
Oct 12 18:27:16.814: INFO: Pod pod-projected-configmaps-3ba7241f-ab9f-4d26-968a-fcdea3b0e4fd no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.869 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:59
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":6,"skipped":20,"failed":0}

SSS
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI online volume expansion should expand volume without restarting pod if attach=on, nodeExpansion=on","total":-1,"completed":3,"skipped":11,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 18:26:39.473: INFO: >>> kubeConfig: /root/.kube/config
... skipping 16 lines ...
Oct 12 18:26:48.628: INFO: PersistentVolumeClaim pvc-l8hp8 found but phase is Pending instead of Bound.
Oct 12 18:26:50.691: INFO: PersistentVolumeClaim pvc-l8hp8 found and phase=Bound (10.313949577s)
Oct 12 18:26:50.691: INFO: Waiting up to 3m0s for PersistentVolume aws-blglz to have phase Bound
Oct 12 18:26:50.740: INFO: PersistentVolume aws-blglz found and phase=Bound (49.278475ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-x6g2
STEP: Creating a pod to test exec-volume-test
Oct 12 18:26:50.889: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-x6g2" in namespace "volume-5046" to be "Succeeded or Failed"
Oct 12 18:26:50.944: INFO: Pod "exec-volume-test-preprovisionedpv-x6g2": Phase="Pending", Reason="", readiness=false. Elapsed: 54.074292ms
Oct 12 18:26:52.995: INFO: Pod "exec-volume-test-preprovisionedpv-x6g2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.105064187s
Oct 12 18:26:55.045: INFO: Pod "exec-volume-test-preprovisionedpv-x6g2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.155348598s
Oct 12 18:26:57.097: INFO: Pod "exec-volume-test-preprovisionedpv-x6g2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.207608876s
Oct 12 18:26:59.159: INFO: Pod "exec-volume-test-preprovisionedpv-x6g2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.269882827s
Oct 12 18:27:01.246: INFO: Pod "exec-volume-test-preprovisionedpv-x6g2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.356396213s
STEP: Saw pod success
Oct 12 18:27:01.246: INFO: Pod "exec-volume-test-preprovisionedpv-x6g2" satisfied condition "Succeeded or Failed"
Oct 12 18:27:01.322: INFO: Trying to get logs from node ip-172-20-37-53.us-west-1.compute.internal pod exec-volume-test-preprovisionedpv-x6g2 container exec-container-preprovisionedpv-x6g2: <nil>
STEP: delete the pod
Oct 12 18:27:01.508: INFO: Waiting for pod exec-volume-test-preprovisionedpv-x6g2 to disappear
Oct 12 18:27:01.562: INFO: Pod exec-volume-test-preprovisionedpv-x6g2 no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-x6g2
Oct 12 18:27:01.562: INFO: Deleting pod "exec-volume-test-preprovisionedpv-x6g2" in namespace "volume-5046"
STEP: Deleting pv and pvc
Oct 12 18:27:01.619: INFO: Deleting PersistentVolumeClaim "pvc-l8hp8"
Oct 12 18:27:01.680: INFO: Deleting PersistentVolume "aws-blglz"
Oct 12 18:27:01.939: INFO: Couldn't delete PD "aws://us-west-1a/vol-0ac1783d223bd4c5f", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0ac1783d223bd4c5f is currently attached to i-0e4cc36ee7feab59e
	status code: 400, request id: 84d5121a-81e5-42e1-ae53-4c4603aff9aa
Oct 12 18:27:07.286: INFO: Couldn't delete PD "aws://us-west-1a/vol-0ac1783d223bd4c5f", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0ac1783d223bd4c5f is currently attached to i-0e4cc36ee7feab59e
	status code: 400, request id: 3cdf1b27-30f7-4e63-b741-d0f5f07104ed
Oct 12 18:27:12.666: INFO: Couldn't delete PD "aws://us-west-1a/vol-0ac1783d223bd4c5f", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0ac1783d223bd4c5f is currently attached to i-0e4cc36ee7feab59e
	status code: 400, request id: 01de1d39-0924-403f-9715-4e7a233ab1b1
Oct 12 18:27:18.026: INFO: Successfully deleted PD "aws://us-west-1a/vol-0ac1783d223bd4c5f".
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 18:27:18.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-5046" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (ext4)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume","total":-1,"completed":4,"skipped":11,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:27:18.154: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
... skipping 66 lines ...
Oct 12 18:27:04.010: INFO: PersistentVolumeClaim pvc-bnlh9 found but phase is Pending instead of Bound.
Oct 12 18:27:06.060: INFO: PersistentVolumeClaim pvc-bnlh9 found and phase=Bound (10.336529162s)
Oct 12 18:27:06.060: INFO: Waiting up to 3m0s for PersistentVolume local-zdms6 to have phase Bound
Oct 12 18:27:06.109: INFO: PersistentVolume local-zdms6 found and phase=Bound (49.251011ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-dlzl
STEP: Creating a pod to test exec-volume-test
Oct 12 18:27:06.261: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-dlzl" in namespace "volume-8922" to be "Succeeded or Failed"
Oct 12 18:27:06.314: INFO: Pod "exec-volume-test-preprovisionedpv-dlzl": Phase="Pending", Reason="", readiness=false. Elapsed: 53.019961ms
Oct 12 18:27:08.365: INFO: Pod "exec-volume-test-preprovisionedpv-dlzl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.103663373s
Oct 12 18:27:10.416: INFO: Pod "exec-volume-test-preprovisionedpv-dlzl": Phase="Pending", Reason="", readiness=false. Elapsed: 4.155458977s
Oct 12 18:27:12.466: INFO: Pod "exec-volume-test-preprovisionedpv-dlzl": Phase="Pending", Reason="", readiness=false. Elapsed: 6.205452438s
Oct 12 18:27:14.517: INFO: Pod "exec-volume-test-preprovisionedpv-dlzl": Phase="Pending", Reason="", readiness=false. Elapsed: 8.255871077s
Oct 12 18:27:16.571: INFO: Pod "exec-volume-test-preprovisionedpv-dlzl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.31030251s
STEP: Saw pod success
Oct 12 18:27:16.571: INFO: Pod "exec-volume-test-preprovisionedpv-dlzl" satisfied condition "Succeeded or Failed"
Oct 12 18:27:16.621: INFO: Trying to get logs from node ip-172-20-59-223.us-west-1.compute.internal pod exec-volume-test-preprovisionedpv-dlzl container exec-container-preprovisionedpv-dlzl: <nil>
STEP: delete the pod
Oct 12 18:27:16.730: INFO: Waiting for pod exec-volume-test-preprovisionedpv-dlzl to disappear
Oct 12 18:27:16.780: INFO: Pod exec-volume-test-preprovisionedpv-dlzl no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-dlzl
Oct 12 18:27:16.780: INFO: Deleting pod "exec-volume-test-preprovisionedpv-dlzl" in namespace "volume-8922"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (ext4)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume","total":-1,"completed":11,"skipped":64,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:27:18.503: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 231 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      (Always)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:208
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents","total":-1,"completed":8,"skipped":28,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 32 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl client-side validation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:992
    should create/apply a valid CR for CRD with validation schema
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1011
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl client-side validation should create/apply a valid CR for CRD with validation schema","total":-1,"completed":7,"skipped":39,"failed":0}

SS
------------------------------
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 18:27:18.613: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail when exceeds active deadline
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:249
STEP: Creating a job
STEP: Ensuring job past active deadline
[AfterEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 18:27:21.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-6426" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] Job should fail when exceeds active deadline","total":-1,"completed":5,"skipped":27,"failed":0}

SSSS
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":65,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 18:26:32.682: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pv
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 27 lines ...
Oct 12 18:26:48.401: INFO: PersistentVolume nfs-h7kp9 found and phase=Bound (49.259877ms)
Oct 12 18:26:48.450: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-dhxxk] to have phase Bound
Oct 12 18:26:48.503: INFO: PersistentVolumeClaim pvc-dhxxk found and phase=Bound (52.780939ms)
STEP: Checking pod has write access to PersistentVolumes
Oct 12 18:26:48.552: INFO: Creating nfs test pod
Oct 12 18:26:48.603: INFO: Pod should terminate with exitcode 0 (success)
Oct 12 18:26:48.603: INFO: Waiting up to 5m0s for pod "pvc-tester-9cb6x" in namespace "pv-5726" to be "Succeeded or Failed"
Oct 12 18:26:48.652: INFO: Pod "pvc-tester-9cb6x": Phase="Pending", Reason="", readiness=false. Elapsed: 49.038627ms
Oct 12 18:26:50.706: INFO: Pod "pvc-tester-9cb6x": Phase="Pending", Reason="", readiness=false. Elapsed: 2.102759763s
Oct 12 18:26:52.760: INFO: Pod "pvc-tester-9cb6x": Phase="Pending", Reason="", readiness=false. Elapsed: 4.156267051s
Oct 12 18:26:54.810: INFO: Pod "pvc-tester-9cb6x": Phase="Pending", Reason="", readiness=false. Elapsed: 6.206562043s
Oct 12 18:26:56.860: INFO: Pod "pvc-tester-9cb6x": Phase="Pending", Reason="", readiness=false. Elapsed: 8.257196286s
Oct 12 18:26:58.911: INFO: Pod "pvc-tester-9cb6x": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.307299079s
STEP: Saw pod success
Oct 12 18:26:58.911: INFO: Pod "pvc-tester-9cb6x" satisfied condition "Succeeded or Failed"
Oct 12 18:26:58.911: INFO: Pod pvc-tester-9cb6x succeeded 
Oct 12 18:26:58.911: INFO: Deleting pod "pvc-tester-9cb6x" in namespace "pv-5726"
Oct 12 18:26:58.971: INFO: Wait up to 5m0s for pod "pvc-tester-9cb6x" to be fully deleted
Oct 12 18:26:59.074: INFO: Creating nfs test pod
Oct 12 18:26:59.134: INFO: Pod should terminate with exitcode 0 (success)
Oct 12 18:26:59.134: INFO: Waiting up to 5m0s for pod "pvc-tester-l2jkr" in namespace "pv-5726" to be "Succeeded or Failed"
Oct 12 18:26:59.184: INFO: Pod "pvc-tester-l2jkr": Phase="Pending", Reason="", readiness=false. Elapsed: 50.188888ms
Oct 12 18:27:01.285: INFO: Pod "pvc-tester-l2jkr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.150380814s
Oct 12 18:27:03.334: INFO: Pod "pvc-tester-l2jkr": Phase="Pending", Reason="", readiness=false. Elapsed: 4.200307198s
Oct 12 18:27:05.391: INFO: Pod "pvc-tester-l2jkr": Phase="Pending", Reason="", readiness=false. Elapsed: 6.256963824s
Oct 12 18:27:07.441: INFO: Pod "pvc-tester-l2jkr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.306962769s
STEP: Saw pod success
Oct 12 18:27:07.441: INFO: Pod "pvc-tester-l2jkr" satisfied condition "Succeeded or Failed"
Oct 12 18:27:07.441: INFO: Pod pvc-tester-l2jkr succeeded 
Oct 12 18:27:07.441: INFO: Deleting pod "pvc-tester-l2jkr" in namespace "pv-5726"
Oct 12 18:27:07.496: INFO: Wait up to 5m0s for pod "pvc-tester-l2jkr" to be fully deleted
Oct 12 18:27:07.598: INFO: Creating nfs test pod
Oct 12 18:27:07.658: INFO: Pod should terminate with exitcode 0 (success)
Oct 12 18:27:07.658: INFO: Waiting up to 5m0s for pod "pvc-tester-hpwkm" in namespace "pv-5726" to be "Succeeded or Failed"
Oct 12 18:27:07.708: INFO: Pod "pvc-tester-hpwkm": Phase="Pending", Reason="", readiness=false. Elapsed: 50.049003ms
Oct 12 18:27:09.759: INFO: Pod "pvc-tester-hpwkm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.10105967s
Oct 12 18:27:11.810: INFO: Pod "pvc-tester-hpwkm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.1518195s
Oct 12 18:27:13.861: INFO: Pod "pvc-tester-hpwkm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.202908625s
STEP: Saw pod success
Oct 12 18:27:13.861: INFO: Pod "pvc-tester-hpwkm" satisfied condition "Succeeded or Failed"
Oct 12 18:27:13.861: INFO: Pod pvc-tester-hpwkm succeeded 
Oct 12 18:27:13.861: INFO: Deleting pod "pvc-tester-hpwkm" in namespace "pv-5726"
Oct 12 18:27:13.917: INFO: Wait up to 5m0s for pod "pvc-tester-hpwkm" to be fully deleted
STEP: Deleting PVCs to invoke reclaim policy
Oct 12 18:27:14.065: INFO: Deleting PVC pvc-nz87n to trigger reclamation of PV nfs-tlqmm
Oct 12 18:27:14.065: INFO: Deleting PersistentVolumeClaim "pvc-nz87n"
... skipping 36 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:122
    with multiple PVs and PVCs all in same ns
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:212
      should create 3 PVs and 3 PVCs: test write access
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:243
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns should create 3 PVs and 3 PVCs: test write access","total":-1,"completed":11,"skipped":65,"failed":0}

S
------------------------------
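The pvc-tester trace above shows the test polling each pod's phase roughly every 2s until it reaches "Succeeded or Failed" or the 5m timeout expires. A minimal sketch of that polling pattern, assuming client-go and the apimachinery wait helpers (this is not the actual e2e framework helper; the package and function names below are hypothetical):

    package e2esketch

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitForPodCompletion polls the pod's phase until it is Succeeded,
    // returns an error if it is Failed, and gives up after 5 minutes,
    // mirroring the "Waiting up to 5m0s for pod ..." lines in the log.
    // Illustrative sketch only, not the real framework code.
    func waitForPodCompletion(cs kubernetes.Interface, ns, name string) error {
        return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
            pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, err
            }
            switch pod.Status.Phase {
            case corev1.PodSucceeded:
                return true, nil // condition "Succeeded or Failed" satisfied
            case corev1.PodFailed:
                return false, fmt.Errorf("pod %s/%s failed", ns, name)
            default:
                return false, nil // still Pending/Running; keep polling
            }
        })
    }

------------------------------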
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
... skipping 98 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":6,"skipped":55,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:27:25.464: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 48 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 123 lines ...
• [SLOW TEST:11.204 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should validate Deployment Status endpoints [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment should validate Deployment Status endpoints [Conformance]","total":-1,"completed":9,"skipped":76,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:27:25.754: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 14 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source","total":-1,"completed":5,"skipped":47,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 18:27:08.101: INFO: >>> kubeConfig: /root/.kube/config
... skipping 16 lines ...
Oct 12 18:27:19.370: INFO: PersistentVolumeClaim pvc-ff6mx found but phase is Pending instead of Bound.
Oct 12 18:27:21.421: INFO: PersistentVolumeClaim pvc-ff6mx found and phase=Bound (10.308258693s)
Oct 12 18:27:21.421: INFO: Waiting up to 3m0s for PersistentVolume local-pk7t9 to have phase Bound
Oct 12 18:27:21.470: INFO: PersistentVolume local-pk7t9 found and phase=Bound (49.840036ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-fhcd
STEP: Creating a pod to test exec-volume-test
Oct 12 18:27:21.623: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-fhcd" in namespace "volume-8877" to be "Succeeded or Failed"
Oct 12 18:27:21.673: INFO: Pod "exec-volume-test-preprovisionedpv-fhcd": Phase="Pending", Reason="", readiness=false. Elapsed: 49.505325ms
Oct 12 18:27:23.724: INFO: Pod "exec-volume-test-preprovisionedpv-fhcd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100489184s
Oct 12 18:27:25.777: INFO: Pod "exec-volume-test-preprovisionedpv-fhcd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.153098474s
STEP: Saw pod success
Oct 12 18:27:25.777: INFO: Pod "exec-volume-test-preprovisionedpv-fhcd" satisfied condition "Succeeded or Failed"
Oct 12 18:27:25.832: INFO: Trying to get logs from node ip-172-20-37-53.us-west-1.compute.internal pod exec-volume-test-preprovisionedpv-fhcd container exec-container-preprovisionedpv-fhcd: <nil>
STEP: delete the pod
Oct 12 18:27:25.962: INFO: Waiting for pod exec-volume-test-preprovisionedpv-fhcd to disappear
Oct 12 18:27:26.012: INFO: Pod exec-volume-test-preprovisionedpv-fhcd no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-fhcd
Oct 12 18:27:26.012: INFO: Deleting pod "exec-volume-test-preprovisionedpv-fhcd" in namespace "volume-8877"
... skipping 17 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":6,"skipped":47,"failed":0}

SS
------------------------------
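The volume-8877 block above also shows the usual wait for a PersistentVolumeClaim to leave Pending and reach Bound before the test pod is created. A comparable sketch under the same assumptions (client-go plus the apimachinery wait helpers; the helper name is hypothetical, not the framework's own):

    package e2esketch

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitForPVCBound polls until the claim reports phase Bound, matching the
    // "found but phase is Pending instead of Bound." lines in the trace above.
    // Illustrative sketch only.
    func waitForPVCBound(cs kubernetes.Interface, ns, name string) error {
        return wait.PollImmediate(2*time.Second, 3*time.Minute, func() (bool, error) {
            pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, err
            }
            return pvc.Status.Phase == corev1.ClaimBound, nil
        })
    }

------------------------------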
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:27:26.749: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 25 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:91
STEP: Creating a pod to test downward API volume plugin
Oct 12 18:27:21.029: INFO: Waiting up to 5m0s for pod "metadata-volume-0a1e94b4-c3d3-471b-889d-1de6d96e1cbc" in namespace "projected-3202" to be "Succeeded or Failed"
Oct 12 18:27:21.084: INFO: Pod "metadata-volume-0a1e94b4-c3d3-471b-889d-1de6d96e1cbc": Phase="Pending", Reason="", readiness=false. Elapsed: 55.241969ms
Oct 12 18:27:23.136: INFO: Pod "metadata-volume-0a1e94b4-c3d3-471b-889d-1de6d96e1cbc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.107580253s
Oct 12 18:27:25.189: INFO: Pod "metadata-volume-0a1e94b4-c3d3-471b-889d-1de6d96e1cbc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.159949663s
Oct 12 18:27:27.249: INFO: Pod "metadata-volume-0a1e94b4-c3d3-471b-889d-1de6d96e1cbc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.220345224s
STEP: Saw pod success
Oct 12 18:27:27.249: INFO: Pod "metadata-volume-0a1e94b4-c3d3-471b-889d-1de6d96e1cbc" satisfied condition "Succeeded or Failed"
Oct 12 18:27:27.301: INFO: Trying to get logs from node ip-172-20-56-153.us-west-1.compute.internal pod metadata-volume-0a1e94b4-c3d3-471b-889d-1de6d96e1cbc container client-container: <nil>
STEP: delete the pod
Oct 12 18:27:27.423: INFO: Waiting for pod metadata-volume-0a1e94b4-c3d3-471b-889d-1de6d96e1cbc to disappear
Oct 12 18:27:27.475: INFO: Pod metadata-volume-0a1e94b4-c3d3-471b-889d-1de6d96e1cbc no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.877 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:91
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":8,"skipped":41,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 30 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    when starting a container that exits
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:42
      should run with the expected status [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":115,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:27:30.162: INFO: Only supported for providers [openstack] (not aws)
... skipping 117 lines ...
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:426

    Requires at least 2 nodes (not 0)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":-1,"completed":12,"skipped":83,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:27:32.920: INFO: Only supported for providers [vsphere] (not aws)
... skipping 73 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl server-side dry-run
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:913
    should check if kubectl can dry-run update Pods [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":-1,"completed":10,"skipped":84,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:27:33.117: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 214 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should create read-only inline ephemeral volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:149
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read-only inline ephemeral volume","total":-1,"completed":5,"skipped":28,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:27:33.236: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 152 lines ...
• [SLOW TEST:34.267 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a custom resource.
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/resource_quota.go:582
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a custom resource.","total":-1,"completed":2,"skipped":4,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:27:34.133: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 70 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 18:27:35.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "networkpolicies-9777" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] NetworkPolicy API should support creating NetworkPolicy API operations","total":-1,"completed":3,"skipped":9,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 21 lines ...
Oct 12 18:27:02.683: INFO: PersistentVolumeClaim pvc-prcdj found but phase is Pending instead of Bound.
Oct 12 18:27:04.735: INFO: PersistentVolumeClaim pvc-prcdj found and phase=Bound (4.153807184s)
Oct 12 18:27:04.735: INFO: Waiting up to 3m0s for PersistentVolume local-gtw7p to have phase Bound
Oct 12 18:27:04.791: INFO: PersistentVolume local-gtw7p found and phase=Bound (56.335684ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-2kzs
STEP: Creating a pod to test atomic-volume-subpath
Oct 12 18:27:04.985: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-2kzs" in namespace "provisioning-7906" to be "Succeeded or Failed"
Oct 12 18:27:05.039: INFO: Pod "pod-subpath-test-preprovisionedpv-2kzs": Phase="Pending", Reason="", readiness=false. Elapsed: 52.786967ms
Oct 12 18:27:07.091: INFO: Pod "pod-subpath-test-preprovisionedpv-2kzs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.104637433s
Oct 12 18:27:09.156: INFO: Pod "pod-subpath-test-preprovisionedpv-2kzs": Phase="Pending", Reason="", readiness=false. Elapsed: 4.170046619s
Oct 12 18:27:11.208: INFO: Pod "pod-subpath-test-preprovisionedpv-2kzs": Phase="Running", Reason="", readiness=true. Elapsed: 6.221873079s
Oct 12 18:27:13.261: INFO: Pod "pod-subpath-test-preprovisionedpv-2kzs": Phase="Running", Reason="", readiness=true. Elapsed: 8.274632636s
Oct 12 18:27:15.313: INFO: Pod "pod-subpath-test-preprovisionedpv-2kzs": Phase="Running", Reason="", readiness=true. Elapsed: 10.327374928s
... skipping 4 lines ...
Oct 12 18:27:25.575: INFO: Pod "pod-subpath-test-preprovisionedpv-2kzs": Phase="Running", Reason="", readiness=true. Elapsed: 20.589116234s
Oct 12 18:27:27.627: INFO: Pod "pod-subpath-test-preprovisionedpv-2kzs": Phase="Running", Reason="", readiness=true. Elapsed: 22.641172425s
Oct 12 18:27:29.679: INFO: Pod "pod-subpath-test-preprovisionedpv-2kzs": Phase="Running", Reason="", readiness=true. Elapsed: 24.693203634s
Oct 12 18:27:31.731: INFO: Pod "pod-subpath-test-preprovisionedpv-2kzs": Phase="Running", Reason="", readiness=true. Elapsed: 26.745412606s
Oct 12 18:27:33.783: INFO: Pod "pod-subpath-test-preprovisionedpv-2kzs": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.796620621s
STEP: Saw pod success
Oct 12 18:27:33.783: INFO: Pod "pod-subpath-test-preprovisionedpv-2kzs" satisfied condition "Succeeded or Failed"
Oct 12 18:27:33.834: INFO: Trying to get logs from node ip-172-20-59-223.us-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-2kzs container test-container-subpath-preprovisionedpv-2kzs: <nil>
STEP: delete the pod
Oct 12 18:27:33.947: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-2kzs to disappear
Oct 12 18:27:33.997: INFO: Pod pod-subpath-test-preprovisionedpv-2kzs no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-2kzs
Oct 12 18:27:33.998: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-2kzs" in namespace "provisioning-7906"
... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":15,"skipped":99,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
... skipping 92 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":7,"skipped":32,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    when running a container with a new image
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266
      should be able to pull from private registry with secret [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:393
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull from private registry with secret [NodeConformance]","total":-1,"completed":7,"skipped":60,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:27:37.014: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 24 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-2b27fb86-5d0f-4304-9fe3-0ad3fb85a668
STEP: Creating a pod to test consume configMaps
Oct 12 18:27:30.551: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-20ed70f8-a5d3-4545-81b1-55615da78672" in namespace "projected-6931" to be "Succeeded or Failed"
Oct 12 18:27:30.603: INFO: Pod "pod-projected-configmaps-20ed70f8-a5d3-4545-81b1-55615da78672": Phase="Pending", Reason="", readiness=false. Elapsed: 51.18293ms
Oct 12 18:27:32.654: INFO: Pod "pod-projected-configmaps-20ed70f8-a5d3-4545-81b1-55615da78672": Phase="Pending", Reason="", readiness=false. Elapsed: 2.102150664s
Oct 12 18:27:34.739: INFO: Pod "pod-projected-configmaps-20ed70f8-a5d3-4545-81b1-55615da78672": Phase="Pending", Reason="", readiness=false. Elapsed: 4.188037829s
Oct 12 18:27:36.790: INFO: Pod "pod-projected-configmaps-20ed70f8-a5d3-4545-81b1-55615da78672": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.238769791s
STEP: Saw pod success
Oct 12 18:27:36.790: INFO: Pod "pod-projected-configmaps-20ed70f8-a5d3-4545-81b1-55615da78672" satisfied condition "Succeeded or Failed"
Oct 12 18:27:36.841: INFO: Trying to get logs from node ip-172-20-56-153.us-west-1.compute.internal pod pod-projected-configmaps-20ed70f8-a5d3-4545-81b1-55615da78672 container projected-configmap-volume-test: <nil>
STEP: delete the pod
Oct 12 18:27:36.950: INFO: Waiting for pod pod-projected-configmaps-20ed70f8-a5d3-4545-81b1-55615da78672 to disappear
Oct 12 18:27:37.001: INFO: Pod pod-projected-configmaps-20ed70f8-a5d3-4545-81b1-55615da78672 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 52 lines ...
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-5283
STEP: Deleting pod verify-service-up-exec-pod-9nt6j in namespace services-5283
STEP: verifying service-headless is not up
Oct 12 18:27:05.954: INFO: Creating new host exec pod
Oct 12 18:27:06.059: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 12 18:27:08.110: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Oct 12 18:27:08.110: INFO: Running '/tmp/kubectl1775810352/kubectl --server=https://api.e2e-2c3257e690-19bfa.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5283 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.66.128.132:80 && echo service-down-failed'
Oct 12 18:27:10.758: INFO: rc: 28
Oct 12 18:27:10.758: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://100.66.128.132:80 && echo service-down-failed" in pod services-5283/verify-service-down-host-exec-pod: error running /tmp/kubectl1775810352/kubectl --server=https://api.e2e-2c3257e690-19bfa.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5283 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.66.128.132:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://100.66.128.132:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-5283
STEP: adding service.kubernetes.io/headless label
STEP: verifying service is not up
Oct 12 18:27:10.921: INFO: Creating new host exec pod
Oct 12 18:27:11.022: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 12 18:27:13.074: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Oct 12 18:27:13.074: INFO: Running '/tmp/kubectl1775810352/kubectl --server=https://api.e2e-2c3257e690-19bfa.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5283 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.70.141.135:80 && echo service-down-failed'
Oct 12 18:27:15.764: INFO: rc: 28
Oct 12 18:27:15.764: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://100.70.141.135:80 && echo service-down-failed" in pod services-5283/verify-service-down-host-exec-pod: error running /tmp/kubectl1775810352/kubectl --server=https://api.e2e-2c3257e690-19bfa.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5283 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.70.141.135:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://100.70.141.135:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-5283
STEP: removing service.kubernetes.io/headless annotation
STEP: verifying service is up
Oct 12 18:27:15.924: INFO: Creating new host exec pod
... skipping 15 lines ...
STEP: verifying service-headless is still not up
Oct 12 18:27:28.061: INFO: Creating new host exec pod
Oct 12 18:27:28.161: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 12 18:27:30.211: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 12 18:27:32.212: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 12 18:27:34.217: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Oct 12 18:27:34.217: INFO: Running '/tmp/kubectl1775810352/kubectl --server=https://api.e2e-2c3257e690-19bfa.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5283 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.66.128.132:80 && echo service-down-failed'
Oct 12 18:27:37.089: INFO: rc: 28
Oct 12 18:27:37.089: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://100.66.128.132:80 && echo service-down-failed" in pod services-5283/verify-service-down-host-exec-pod: error running /tmp/kubectl1775810352/kubectl --server=https://api.e2e-2c3257e690-19bfa.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-5283 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.66.128.132:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://100.66.128.132:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-5283
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 18:27:37.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 5 lines ...
• [SLOW TEST:71.247 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should implement service.kubernetes.io/headless
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1937
------------------------------
{"msg":"PASSED [sig-network] Services should implement service.kubernetes.io/headless","total":-1,"completed":7,"skipped":60,"failed":0}

S
------------------------------
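The services-5283 block above verifies that the relabelled service has stopped serving by exec-ing curl inside a host-exec pod and treating curl's connect-timeout exit code 28 as the expected outcome. A rough sketch of that negative check using os/exec, assuming kubectl is on the PATH (hypothetical helper; the real test drives kubectl through the framework's own wrappers):

    package e2esketch

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    // verifyServiceDown runs curl inside an existing host-exec pod via kubectl.
    // curl exits 28 on a connect timeout, which the log above treats as proof
    // that the ClusterIP is no longer reachable. Illustrative sketch only.
    func verifyServiceDown(kubeconfig, ns, pod, url string) error {
        cmd := exec.Command("kubectl",
            "--kubeconfig="+kubeconfig, "--namespace="+ns,
            "exec", pod, "--", "/bin/sh", "-c",
            "curl -g -s --connect-timeout 2 "+url+" && echo service-down-failed")
        err := cmd.Run()
        var exitErr *exec.ExitError
        if errors.As(err, &exitErr) && exitErr.ExitCode() == 28 {
            return nil // timed out as expected: service is down
        }
        return fmt.Errorf("expected curl exit code 28 (connect timeout), got %v", err)
    }

------------------------------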
[BeforeEach] [sig-instrumentation] MetricsGrabber
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 40 lines ...
• [SLOW TEST:9.734 seconds]
[sig-apps] ReplicationController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","total":-1,"completed":11,"skipped":99,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:27:42.988: INFO: Driver local doesn't support ext3 -- skipping
... skipping 153 lines ...
• [SLOW TEST:10.794 seconds]
[sig-apps] DisruptionController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  evictions: maxUnavailable allow single eviction, percentage => should allow an eviction
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:286
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController evictions: maxUnavailable allow single eviction, percentage =\u003e should allow an eviction","total":-1,"completed":6,"skipped":32,"failed":0}

SS
------------------------------
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 20 lines ...
• [SLOW TEST:8.627 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should get a host IP [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Pods should get a host IP [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":105,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:27:44.899: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 113 lines ...
Oct 12 18:27:32.621: INFO: PersistentVolumeClaim pvc-j4htp found but phase is Pending instead of Bound.
Oct 12 18:27:34.684: INFO: PersistentVolumeClaim pvc-j4htp found and phase=Bound (12.369434234s)
Oct 12 18:27:34.684: INFO: Waiting up to 3m0s for PersistentVolume local-hr567 to have phase Bound
Oct 12 18:27:34.747: INFO: PersistentVolume local-hr567 found and phase=Bound (62.464367ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-cfxz
STEP: Creating a pod to test subpath
Oct 12 18:27:34.908: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-cfxz" in namespace "provisioning-7586" to be "Succeeded or Failed"
Oct 12 18:27:34.961: INFO: Pod "pod-subpath-test-preprovisionedpv-cfxz": Phase="Pending", Reason="", readiness=false. Elapsed: 52.785974ms
Oct 12 18:27:37.012: INFO: Pod "pod-subpath-test-preprovisionedpv-cfxz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.103218171s
Oct 12 18:27:39.063: INFO: Pod "pod-subpath-test-preprovisionedpv-cfxz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.154222654s
Oct 12 18:27:41.113: INFO: Pod "pod-subpath-test-preprovisionedpv-cfxz": Phase="Pending", Reason="", readiness=false. Elapsed: 6.204301533s
Oct 12 18:27:43.163: INFO: Pod "pod-subpath-test-preprovisionedpv-cfxz": Phase="Pending", Reason="", readiness=false. Elapsed: 8.254505738s
Oct 12 18:27:45.213: INFO: Pod "pod-subpath-test-preprovisionedpv-cfxz": Phase="Pending", Reason="", readiness=false. Elapsed: 10.304507137s
Oct 12 18:27:47.264: INFO: Pod "pod-subpath-test-preprovisionedpv-cfxz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.355466694s
STEP: Saw pod success
Oct 12 18:27:47.264: INFO: Pod "pod-subpath-test-preprovisionedpv-cfxz" satisfied condition "Succeeded or Failed"
Oct 12 18:27:47.313: INFO: Trying to get logs from node ip-172-20-37-53.us-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-cfxz container test-container-volume-preprovisionedpv-cfxz: <nil>
STEP: delete the pod
Oct 12 18:27:47.421: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-cfxz to disappear
Oct 12 18:27:47.471: INFO: Pod pod-subpath-test-preprovisionedpv-cfxz no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-cfxz
Oct 12 18:27:47.471: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-cfxz" in namespace "provisioning-7586"
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":7,"skipped":23,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:27:49.159: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 169 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:379
    should return command exit codes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:499
      running a failing command
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:517
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should return command exit codes running a failing command","total":-1,"completed":7,"skipped":75,"failed":0}

S
------------------------------
[BeforeEach] [sig-windows] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/windows/framework.go:28
Oct 12 18:27:51.048: INFO: Only supported for node OS distro [windows] (not debian)
... skipping 107 lines ...
Oct 12 18:27:32.752: INFO: PersistentVolumeClaim pvc-xkh4z found but phase is Pending instead of Bound.
Oct 12 18:27:34.803: INFO: PersistentVolumeClaim pvc-xkh4z found and phase=Bound (8.254172532s)
Oct 12 18:27:34.803: INFO: Waiting up to 3m0s for PersistentVolume local-4vk4r to have phase Bound
Oct 12 18:27:34.857: INFO: PersistentVolume local-4vk4r found and phase=Bound (54.081502ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-4xxv
STEP: Creating a pod to test subpath
Oct 12 18:27:35.021: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-4xxv" in namespace "provisioning-7344" to be "Succeeded or Failed"
Oct 12 18:27:35.071: INFO: Pod "pod-subpath-test-preprovisionedpv-4xxv": Phase="Pending", Reason="", readiness=false. Elapsed: 50.253578ms
Oct 12 18:27:37.121: INFO: Pod "pod-subpath-test-preprovisionedpv-4xxv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100635686s
Oct 12 18:27:39.174: INFO: Pod "pod-subpath-test-preprovisionedpv-4xxv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.153748062s
Oct 12 18:27:41.232: INFO: Pod "pod-subpath-test-preprovisionedpv-4xxv": Phase="Pending", Reason="", readiness=false. Elapsed: 6.211580741s
Oct 12 18:27:43.283: INFO: Pod "pod-subpath-test-preprovisionedpv-4xxv": Phase="Pending", Reason="", readiness=false. Elapsed: 8.262110311s
Oct 12 18:27:45.333: INFO: Pod "pod-subpath-test-preprovisionedpv-4xxv": Phase="Pending", Reason="", readiness=false. Elapsed: 10.312466196s
Oct 12 18:27:47.383: INFO: Pod "pod-subpath-test-preprovisionedpv-4xxv": Phase="Pending", Reason="", readiness=false. Elapsed: 12.362202295s
Oct 12 18:27:49.434: INFO: Pod "pod-subpath-test-preprovisionedpv-4xxv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.413039417s
STEP: Saw pod success
Oct 12 18:27:49.434: INFO: Pod "pod-subpath-test-preprovisionedpv-4xxv" satisfied condition "Succeeded or Failed"
Oct 12 18:27:49.486: INFO: Trying to get logs from node ip-172-20-37-53.us-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-4xxv container test-container-volume-preprovisionedpv-4xxv: <nil>
STEP: delete the pod
Oct 12 18:27:49.607: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-4xxv to disappear
Oct 12 18:27:49.656: INFO: Pod pod-subpath-test-preprovisionedpv-4xxv no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-4xxv
Oct 12 18:27:49.657: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-4xxv" in namespace "provisioning-7344"
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":6,"skipped":31,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:27:51.320: INFO: Only supported for providers [gce gke] (not aws)
... skipping 47 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 18:27:51.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "netpol-8318" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Netpol API should support creating NetworkPolicy API operations","total":-1,"completed":7,"skipped":58,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:27:52.054: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 22 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:75
STEP: Creating configMap with name configmap-test-volume-044731e8-b6c5-420c-a61d-4a44ed4bd11d
STEP: Creating a pod to test consume configMaps
Oct 12 18:27:43.486: INFO: Waiting up to 5m0s for pod "pod-configmaps-fae8ed3a-b90d-4359-b734-36bbd30f0f4d" in namespace "configmap-3010" to be "Succeeded or Failed"
Oct 12 18:27:43.538: INFO: Pod "pod-configmaps-fae8ed3a-b90d-4359-b734-36bbd30f0f4d": Phase="Pending", Reason="", readiness=false. Elapsed: 51.929985ms
Oct 12 18:27:45.589: INFO: Pod "pod-configmaps-fae8ed3a-b90d-4359-b734-36bbd30f0f4d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.103573841s
Oct 12 18:27:47.643: INFO: Pod "pod-configmaps-fae8ed3a-b90d-4359-b734-36bbd30f0f4d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.157191003s
Oct 12 18:27:49.696: INFO: Pod "pod-configmaps-fae8ed3a-b90d-4359-b734-36bbd30f0f4d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.210104436s
Oct 12 18:27:51.749: INFO: Pod "pod-configmaps-fae8ed3a-b90d-4359-b734-36bbd30f0f4d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.263158561s
Oct 12 18:27:53.802: INFO: Pod "pod-configmaps-fae8ed3a-b90d-4359-b734-36bbd30f0f4d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.316069153s
STEP: Saw pod success
Oct 12 18:27:53.802: INFO: Pod "pod-configmaps-fae8ed3a-b90d-4359-b734-36bbd30f0f4d" satisfied condition "Succeeded or Failed"
Oct 12 18:27:53.854: INFO: Trying to get logs from node ip-172-20-56-153.us-west-1.compute.internal pod pod-configmaps-fae8ed3a-b90d-4359-b734-36bbd30f0f4d container agnhost-container: <nil>
STEP: delete the pod
Oct 12 18:27:53.963: INFO: Waiting for pod pod-configmaps-fae8ed3a-b90d-4359-b734-36bbd30f0f4d to disappear
Oct 12 18:27:54.014: INFO: Pod pod-configmaps-fae8ed3a-b90d-4359-b734-36bbd30f0f4d no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 67 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":4,"skipped":10,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:27:54.667: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 45 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50
[It] volume on default medium should have the correct mode using FSGroup
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:71
STEP: Creating a pod to test emptydir volume type on node default medium
Oct 12 18:27:49.478: INFO: Waiting up to 5m0s for pod "pod-5cd4e433-95ec-4119-9d44-f85f7e214f2e" in namespace "emptydir-5121" to be "Succeeded or Failed"
Oct 12 18:27:49.527: INFO: Pod "pod-5cd4e433-95ec-4119-9d44-f85f7e214f2e": Phase="Pending", Reason="", readiness=false. Elapsed: 49.510746ms
Oct 12 18:27:51.629: INFO: Pod "pod-5cd4e433-95ec-4119-9d44-f85f7e214f2e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.150611621s
Oct 12 18:27:53.679: INFO: Pod "pod-5cd4e433-95ec-4119-9d44-f85f7e214f2e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.201021431s
Oct 12 18:27:55.731: INFO: Pod "pod-5cd4e433-95ec-4119-9d44-f85f7e214f2e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.253344139s
Oct 12 18:27:57.788: INFO: Pod "pod-5cd4e433-95ec-4119-9d44-f85f7e214f2e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.309604177s
STEP: Saw pod success
Oct 12 18:27:57.788: INFO: Pod "pod-5cd4e433-95ec-4119-9d44-f85f7e214f2e" satisfied condition "Succeeded or Failed"
Oct 12 18:27:57.838: INFO: Trying to get logs from node ip-172-20-47-26.us-west-1.compute.internal pod pod-5cd4e433-95ec-4119-9d44-f85f7e214f2e container test-container: <nil>
STEP: delete the pod
Oct 12 18:27:57.945: INFO: Waiting for pod pod-5cd4e433-95ec-4119-9d44-f85f7e214f2e to disappear
Oct 12 18:27:57.994: INFO: Pod pod-5cd4e433-95ec-4119-9d44-f85f7e214f2e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 6 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:48
    volume on default medium should have the correct mode using FSGroup
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:71
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on default medium should have the correct mode using FSGroup","total":-1,"completed":8,"skipped":29,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:27:58.117: INFO: Only supported for providers [openstack] (not aws)
... skipping 80 lines ...
• [SLOW TEST:16.565 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should be able to schedule after more than 100 missed schedule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:189
------------------------------
{"msg":"PASSED [sig-apps] CronJob should be able to schedule after more than 100 missed schedule","total":-1,"completed":17,"skipped":124,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:28:01.588: INFO: Driver csi-hostpath doesn't support ext3 -- skipping
... skipping 24 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-map-6d8e6675-614f-45f2-91f1-bf371ce28c7c
STEP: Creating a pod to test consume configMaps
Oct 12 18:27:55.063: INFO: Waiting up to 5m0s for pod "pod-configmaps-3189db23-bd43-441e-a941-6cc4c136ea49" in namespace "configmap-5875" to be "Succeeded or Failed"
Oct 12 18:27:55.115: INFO: Pod "pod-configmaps-3189db23-bd43-441e-a941-6cc4c136ea49": Phase="Pending", Reason="", readiness=false. Elapsed: 51.933865ms
Oct 12 18:27:57.171: INFO: Pod "pod-configmaps-3189db23-bd43-441e-a941-6cc4c136ea49": Phase="Pending", Reason="", readiness=false. Elapsed: 2.108469148s
Oct 12 18:27:59.225: INFO: Pod "pod-configmaps-3189db23-bd43-441e-a941-6cc4c136ea49": Phase="Pending", Reason="", readiness=false. Elapsed: 4.161696798s
Oct 12 18:28:01.277: INFO: Pod "pod-configmaps-3189db23-bd43-441e-a941-6cc4c136ea49": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.213875693s
STEP: Saw pod success
Oct 12 18:28:01.277: INFO: Pod "pod-configmaps-3189db23-bd43-441e-a941-6cc4c136ea49" satisfied condition "Succeeded or Failed"
Oct 12 18:28:01.329: INFO: Trying to get logs from node ip-172-20-47-26.us-west-1.compute.internal pod pod-configmaps-3189db23-bd43-441e-a941-6cc4c136ea49 container agnhost-container: <nil>
STEP: delete the pod
Oct 12 18:28:01.451: INFO: Waiting for pod pod-configmaps-3189db23-bd43-441e-a941-6cc4c136ea49 to disappear
Oct 12 18:28:01.503: INFO: Pod pod-configmaps-3189db23-bd43-441e-a941-6cc4c136ea49 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.912 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":14,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:28:01.636: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 125 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 18:28:02.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-2983" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return pod details","total":-1,"completed":18,"skipped":127,"failed":0}

SS
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":12,"skipped":123,"failed":0}
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 18:27:54.134: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run with an explicit non-root user ID [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:129
Oct 12 18:27:54.459: INFO: Waiting up to 5m0s for pod "explicit-nonroot-uid" in namespace "security-context-test-7938" to be "Succeeded or Failed"
Oct 12 18:27:54.511: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 51.743617ms
Oct 12 18:27:56.563: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.103597135s
Oct 12 18:27:58.616: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 4.156372406s
Oct 12 18:28:00.668: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 6.208381196s
Oct 12 18:28:02.720: INFO: Pod "explicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.260693689s
Oct 12 18:28:02.720: INFO: Pod "explicit-nonroot-uid" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 18:28:02.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-7938" for this suite.


... skipping 2 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a container with runAsNonRoot
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:104
    should run with an explicit non-root user ID [LinuxOnly]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:129
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should run with an explicit non-root user ID [LinuxOnly]","total":-1,"completed":13,"skipped":123,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 77 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":6,"skipped":57,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:28:05.200: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 276 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":9,"skipped":32,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 18:28:05.679: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0777 on node default medium
Oct 12 18:28:05.989: INFO: Waiting up to 5m0s for pod "pod-3a68b974-f962-461d-b7f7-41ca1c88be42" in namespace "emptydir-8526" to be "Succeeded or Failed"
Oct 12 18:28:06.039: INFO: Pod "pod-3a68b974-f962-461d-b7f7-41ca1c88be42": Phase="Pending", Reason="", readiness=false. Elapsed: 49.483849ms
Oct 12 18:28:08.089: INFO: Pod "pod-3a68b974-f962-461d-b7f7-41ca1c88be42": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.099669768s
STEP: Saw pod success
Oct 12 18:28:08.089: INFO: Pod "pod-3a68b974-f962-461d-b7f7-41ca1c88be42" satisfied condition "Succeeded or Failed"
Oct 12 18:28:08.143: INFO: Trying to get logs from node ip-172-20-37-53.us-west-1.compute.internal pod pod-3a68b974-f962-461d-b7f7-41ca1c88be42 container test-container: <nil>
STEP: delete the pod
Oct 12 18:28:08.251: INFO: Waiting for pod pod-3a68b974-f962-461d-b7f7-41ca1c88be42 to disappear
Oct 12 18:28:08.301: INFO: Pod pod-3a68b974-f962-461d-b7f7-41ca1c88be42 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 8 lines ...
Oct 12 18:28:01.740: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0644 on tmpfs
Oct 12 18:28:02.053: INFO: Waiting up to 5m0s for pod "pod-61d0a62c-0a10-45b1-bc89-3a9d5c6bfe64" in namespace "emptydir-7092" to be "Succeeded or Failed"
Oct 12 18:28:02.106: INFO: Pod "pod-61d0a62c-0a10-45b1-bc89-3a9d5c6bfe64": Phase="Pending", Reason="", readiness=false. Elapsed: 52.28963ms
Oct 12 18:28:04.158: INFO: Pod "pod-61d0a62c-0a10-45b1-bc89-3a9d5c6bfe64": Phase="Pending", Reason="", readiness=false. Elapsed: 2.104407933s
Oct 12 18:28:06.210: INFO: Pod "pod-61d0a62c-0a10-45b1-bc89-3a9d5c6bfe64": Phase="Pending", Reason="", readiness=false. Elapsed: 4.15646969s
Oct 12 18:28:08.264: INFO: Pod "pod-61d0a62c-0a10-45b1-bc89-3a9d5c6bfe64": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.20994907s
STEP: Saw pod success
Oct 12 18:28:08.264: INFO: Pod "pod-61d0a62c-0a10-45b1-bc89-3a9d5c6bfe64" satisfied condition "Succeeded or Failed"
Oct 12 18:28:08.316: INFO: Trying to get logs from node ip-172-20-59-223.us-west-1.compute.internal pod pod-61d0a62c-0a10-45b1-bc89-3a9d5c6bfe64 container test-container: <nil>
STEP: delete the pod
Oct 12 18:28:08.443: INFO: Waiting for pod pod-61d0a62c-0a10-45b1-bc89-3a9d5c6bfe64 to disappear
Oct 12 18:28:08.495: INFO: Pod pod-61d0a62c-0a10-45b1-bc89-3a9d5c6bfe64 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.920 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":36,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
... skipping 60 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] volumes should store data","total":-1,"completed":8,"skipped":61,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:28:09.854: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 100 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with different fsgroup applied to the volume contents
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:208
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with different fsgroup applied to the volume contents","total":-1,"completed":4,"skipped":28,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:28:10.792: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 82 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23
  Granular Checks: Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":69,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:28:11.049: INFO: Only supported for providers [gce gke] (not aws)
... skipping 81 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":76,"failed":0}
[BeforeEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 18:28:08.413: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap configmap-1703/configmap-test-0f3aaf3e-3242-4329-98a0-09a72984c6eb
STEP: Creating a pod to test consume configMaps
Oct 12 18:28:08.769: INFO: Waiting up to 5m0s for pod "pod-configmaps-42b22c1e-1d1f-4ba0-a584-be2378896e9a" in namespace "configmap-1703" to be "Succeeded or Failed"
Oct 12 18:28:08.819: INFO: Pod "pod-configmaps-42b22c1e-1d1f-4ba0-a584-be2378896e9a": Phase="Pending", Reason="", readiness=false. Elapsed: 49.709369ms
Oct 12 18:28:10.870: INFO: Pod "pod-configmaps-42b22c1e-1d1f-4ba0-a584-be2378896e9a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.100545557s
STEP: Saw pod success
Oct 12 18:28:10.870: INFO: Pod "pod-configmaps-42b22c1e-1d1f-4ba0-a584-be2378896e9a" satisfied condition "Succeeded or Failed"
Oct 12 18:28:10.919: INFO: Trying to get logs from node ip-172-20-37-53.us-west-1.compute.internal pod pod-configmaps-42b22c1e-1d1f-4ba0-a584-be2378896e9a container env-test: <nil>
STEP: delete the pod
Oct 12 18:28:11.026: INFO: Waiting for pod pod-configmaps-42b22c1e-1d1f-4ba0-a584-be2378896e9a to disappear
Oct 12 18:28:11.076: INFO: Pod pod-configmaps-42b22c1e-1d1f-4ba0-a584-be2378896e9a no longer exists
[AfterEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 25 lines ...
• [SLOW TEST:11.152 seconds]
[sig-auth] Certificates API [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should support building a client with a CSR
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/certificates.go:57
------------------------------
{"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR","total":-1,"completed":14,"skipped":126,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:28:14.097: INFO: Only supported for providers [gce gke] (not aws)
... skipping 55 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:97
    should list, patch and delete a collection of StatefulSets [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should list, patch and delete a collection of StatefulSets [Conformance]","total":-1,"completed":13,"skipped":91,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:28:14.280: INFO: Only supported for providers [azure] (not aws)
... skipping 111 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Oct 12 18:28:11.112: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ad1fd1e3-1237-4dba-94cc-7568399d3181" in namespace "downward-api-3812" to be "Succeeded or Failed"
Oct 12 18:28:11.167: INFO: Pod "downwardapi-volume-ad1fd1e3-1237-4dba-94cc-7568399d3181": Phase="Pending", Reason="", readiness=false. Elapsed: 54.329842ms
Oct 12 18:28:13.218: INFO: Pod "downwardapi-volume-ad1fd1e3-1237-4dba-94cc-7568399d3181": Phase="Pending", Reason="", readiness=false. Elapsed: 2.10523467s
Oct 12 18:28:15.269: INFO: Pod "downwardapi-volume-ad1fd1e3-1237-4dba-94cc-7568399d3181": Phase="Pending", Reason="", readiness=false. Elapsed: 4.156093183s
Oct 12 18:28:17.320: INFO: Pod "downwardapi-volume-ad1fd1e3-1237-4dba-94cc-7568399d3181": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.207124695s
STEP: Saw pod success
Oct 12 18:28:17.320: INFO: Pod "downwardapi-volume-ad1fd1e3-1237-4dba-94cc-7568399d3181" satisfied condition "Succeeded or Failed"
Oct 12 18:28:17.370: INFO: Trying to get logs from node ip-172-20-37-53.us-west-1.compute.internal pod downwardapi-volume-ad1fd1e3-1237-4dba-94cc-7568399d3181 container client-container: <nil>
STEP: delete the pod
Oct 12 18:28:17.488: INFO: Waiting for pod downwardapi-volume-ad1fd1e3-1237-4dba-94cc-7568399d3181 to disappear
Oct 12 18:28:17.540: INFO: Pod downwardapi-volume-ad1fd1e3-1237-4dba-94cc-7568399d3181 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.842 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":34,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":122,"failed":0}
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 18:27:37.120: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 130 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume one after the other
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":8,"skipped":63,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:28:19.886: INFO: Only supported for providers [openstack] (not aws)
... skipping 67 lines ...
Oct 12 18:28:14.106: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support container.SecurityContext.RunAsUser [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:109
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Oct 12 18:28:14.419: INFO: Waiting up to 5m0s for pod "security-context-97c1ede1-b4ed-4c3b-b836-053617016fce" in namespace "security-context-4592" to be "Succeeded or Failed"
Oct 12 18:28:14.471: INFO: Pod "security-context-97c1ede1-b4ed-4c3b-b836-053617016fce": Phase="Pending", Reason="", readiness=false. Elapsed: 51.368265ms
Oct 12 18:28:16.523: INFO: Pod "security-context-97c1ede1-b4ed-4c3b-b836-053617016fce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.103455785s
Oct 12 18:28:18.576: INFO: Pod "security-context-97c1ede1-b4ed-4c3b-b836-053617016fce": Phase="Pending", Reason="", readiness=false. Elapsed: 4.156257047s
Oct 12 18:28:20.629: INFO: Pod "security-context-97c1ede1-b4ed-4c3b-b836-053617016fce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.209993938s
STEP: Saw pod success
Oct 12 18:28:20.630: INFO: Pod "security-context-97c1ede1-b4ed-4c3b-b836-053617016fce" satisfied condition "Succeeded or Failed"
Oct 12 18:28:20.683: INFO: Trying to get logs from node ip-172-20-56-153.us-west-1.compute.internal pod security-context-97c1ede1-b4ed-4c3b-b836-053617016fce container test-container: <nil>
STEP: delete the pod
Oct 12 18:28:20.813: INFO: Waiting for pod security-context-97c1ede1-b4ed-4c3b-b836-053617016fce to disappear
Oct 12 18:28:20.865: INFO: Pod security-context-97c1ede1-b4ed-4c3b-b836-053617016fce no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.872 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support container.SecurityContext.RunAsUser [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:109
------------------------------
{"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser [LinuxOnly]","total":-1,"completed":15,"skipped":133,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:28:21.000: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
... skipping 221 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Guestbook application
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:339
    should create and stop a working application  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":-1,"completed":10,"skipped":33,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:28:22.278: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 121 lines ...
• [SLOW TEST:31.701 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":8,"skipped":91,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:28:22.870: INFO: Only supported for providers [openstack] (not aws)
... skipping 144 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 18:28:22.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5568" for this suite.

•S
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":-1,"completed":11,"skipped":43,"failed":0}
[BeforeEach] [sig-storage] Volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 18:28:23.022: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename volume
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 181 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSIStorageCapacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1257
    CSIStorageCapacity disabled
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1300
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity disabled","total":-1,"completed":10,"skipped":77,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:28:23.764: INFO: Only supported for providers [vsphere] (not aws)
... skipping 83 lines ...
      Driver emptydir doesn't support PreprovisionedPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
S
------------------------------
{"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from a ControllerManager.","total":-1,"completed":8,"skipped":64,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 18:27:38.329: INFO: >>> kubeConfig: /root/.kube/config
... skipping 89 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:205

  Only supported for providers [gce] (not aws)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:62
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":9,"skipped":64,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:28:24.180: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 115 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-link-bindmounted]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 50 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:239

      Driver "aws" does not support cloning - skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:241
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":-1,"completed":13,"skipped":86,"failed":0}
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 18:28:23.461: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 11 lines ...
• [SLOW TEST:5.827 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":-1,"completed":14,"skipped":86,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:28:29.321: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 21 lines ...
Oct 12 18:28:24.674: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support seccomp unconfined on the pod [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:169
STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod
Oct 12 18:28:24.984: INFO: Waiting up to 5m0s for pod "security-context-ecac9beb-979e-4723-9ae9-e19633cdcfb7" in namespace "security-context-9081" to be "Succeeded or Failed"
Oct 12 18:28:25.034: INFO: Pod "security-context-ecac9beb-979e-4723-9ae9-e19633cdcfb7": Phase="Pending", Reason="", readiness=false. Elapsed: 50.207371ms
Oct 12 18:28:27.085: INFO: Pod "security-context-ecac9beb-979e-4723-9ae9-e19633cdcfb7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.10130671s
Oct 12 18:28:29.137: INFO: Pod "security-context-ecac9beb-979e-4723-9ae9-e19633cdcfb7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.152741488s
Oct 12 18:28:31.188: INFO: Pod "security-context-ecac9beb-979e-4723-9ae9-e19633cdcfb7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.203908219s
STEP: Saw pod success
Oct 12 18:28:31.188: INFO: Pod "security-context-ecac9beb-979e-4723-9ae9-e19633cdcfb7" satisfied condition "Succeeded or Failed"
Oct 12 18:28:31.238: INFO: Trying to get logs from node ip-172-20-59-223.us-west-1.compute.internal pod security-context-ecac9beb-979e-4723-9ae9-e19633cdcfb7 container test-container: <nil>
STEP: delete the pod
Oct 12 18:28:31.348: INFO: Waiting for pod security-context-ecac9beb-979e-4723-9ae9-e19633cdcfb7 to disappear
Oct 12 18:28:31.397: INFO: Pod security-context-ecac9beb-979e-4723-9ae9-e19633cdcfb7 no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.825 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support seccomp unconfined on the pod [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:169
------------------------------
{"msg":"PASSED [sig-node] Security Context should support seccomp unconfined on the pod [LinuxOnly]","total":-1,"completed":10,"skipped":84,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
... skipping 48 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      Verify if offline PVC expansion works
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:174
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works","total":-1,"completed":9,"skipped":80,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:28:31.540: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 65 lines ...
• [SLOW TEST:13.563 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":124,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:28:32.263: INFO: Only supported for providers [azure] (not aws)
... skipping 23 lines ...
Oct 12 18:28:02.147: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to unmount after the subpath directory is deleted [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:445
Oct 12 18:28:02.403: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Oct 12 18:28:02.510: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-6700" in namespace "provisioning-6700" to be "Succeeded or Failed"
Oct 12 18:28:02.562: INFO: Pod "hostpath-symlink-prep-provisioning-6700": Phase="Pending", Reason="", readiness=false. Elapsed: 52.597451ms
Oct 12 18:28:04.614: INFO: Pod "hostpath-symlink-prep-provisioning-6700": Phase="Pending", Reason="", readiness=false. Elapsed: 2.104595879s
Oct 12 18:28:06.666: INFO: Pod "hostpath-symlink-prep-provisioning-6700": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.156409628s
STEP: Saw pod success
Oct 12 18:28:06.666: INFO: Pod "hostpath-symlink-prep-provisioning-6700" satisfied condition "Succeeded or Failed"
Oct 12 18:28:06.666: INFO: Deleting pod "hostpath-symlink-prep-provisioning-6700" in namespace "provisioning-6700"
Oct 12 18:28:06.729: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-6700" to be fully deleted
Oct 12 18:28:06.779: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-fzh7
Oct 12 18:28:16.934: INFO: Running '/tmp/kubectl1775810352/kubectl --server=https://api.e2e-2c3257e690-19bfa.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=provisioning-6700 exec pod-subpath-test-inlinevolume-fzh7 --container test-container-volume-inlinevolume-fzh7 -- /bin/sh -c rm -r /test-volume/provisioning-6700'
Oct 12 18:28:17.687: INFO: stderr: ""
Oct 12 18:28:17.687: INFO: stdout: ""
STEP: Deleting pod pod-subpath-test-inlinevolume-fzh7
Oct 12 18:28:17.687: INFO: Deleting pod "pod-subpath-test-inlinevolume-fzh7" in namespace "provisioning-6700"
Oct 12 18:28:17.739: INFO: Wait up to 5m0s for pod "pod-subpath-test-inlinevolume-fzh7" to be fully deleted
STEP: Deleting pod
Oct 12 18:28:23.847: INFO: Deleting pod "pod-subpath-test-inlinevolume-fzh7" in namespace "provisioning-6700"
Oct 12 18:28:23.960: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-6700" in namespace "provisioning-6700" to be "Succeeded or Failed"
Oct 12 18:28:24.011: INFO: Pod "hostpath-symlink-prep-provisioning-6700": Phase="Pending", Reason="", readiness=false. Elapsed: 50.715108ms
Oct 12 18:28:26.063: INFO: Pod "hostpath-symlink-prep-provisioning-6700": Phase="Pending", Reason="", readiness=false. Elapsed: 2.102555386s
Oct 12 18:28:28.116: INFO: Pod "hostpath-symlink-prep-provisioning-6700": Phase="Pending", Reason="", readiness=false. Elapsed: 4.155817198s
Oct 12 18:28:30.172: INFO: Pod "hostpath-symlink-prep-provisioning-6700": Phase="Pending", Reason="", readiness=false. Elapsed: 6.211868967s
Oct 12 18:28:32.223: INFO: Pod "hostpath-symlink-prep-provisioning-6700": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.262976425s
STEP: Saw pod success
Oct 12 18:28:32.224: INFO: Pod "hostpath-symlink-prep-provisioning-6700" satisfied condition "Succeeded or Failed"
Oct 12 18:28:32.224: INFO: Deleting pod "hostpath-symlink-prep-provisioning-6700" in namespace "provisioning-6700"
Oct 12 18:28:32.279: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-6700" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 18:28:32.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-6700" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:445
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":19,"skipped":129,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:28:32.463: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 218 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI FSGroupPolicy [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1559
    should modify fsGroup if fsGroupPolicy=default
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1583
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default","total":-1,"completed":8,"skipped":49,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 20 lines ...
Oct 12 18:28:19.018: INFO: PersistentVolumeClaim pvc-pfhzn found but phase is Pending instead of Bound.
Oct 12 18:28:21.071: INFO: PersistentVolumeClaim pvc-pfhzn found and phase=Bound (2.107935788s)
Oct 12 18:28:21.071: INFO: Waiting up to 3m0s for PersistentVolume local-c98q2 to have phase Bound
Oct 12 18:28:21.126: INFO: PersistentVolume local-c98q2 found and phase=Bound (55.253508ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-j2jp
STEP: Creating a pod to test subpath
Oct 12 18:28:21.289: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-j2jp" in namespace "provisioning-3550" to be "Succeeded or Failed"
Oct 12 18:28:21.344: INFO: Pod "pod-subpath-test-preprovisionedpv-j2jp": Phase="Pending", Reason="", readiness=false. Elapsed: 55.651556ms
Oct 12 18:28:23.402: INFO: Pod "pod-subpath-test-preprovisionedpv-j2jp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.11352931s
Oct 12 18:28:25.455: INFO: Pod "pod-subpath-test-preprovisionedpv-j2jp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.166739483s
Oct 12 18:28:27.513: INFO: Pod "pod-subpath-test-preprovisionedpv-j2jp": Phase="Pending", Reason="", readiness=false. Elapsed: 6.223993426s
Oct 12 18:28:29.569: INFO: Pod "pod-subpath-test-preprovisionedpv-j2jp": Phase="Pending", Reason="", readiness=false. Elapsed: 8.280245853s
Oct 12 18:28:31.623: INFO: Pod "pod-subpath-test-preprovisionedpv-j2jp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.334162526s
STEP: Saw pod success
Oct 12 18:28:31.623: INFO: Pod "pod-subpath-test-preprovisionedpv-j2jp" satisfied condition "Succeeded or Failed"
Oct 12 18:28:31.677: INFO: Trying to get logs from node ip-172-20-56-153.us-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-j2jp container test-container-volume-preprovisionedpv-j2jp: <nil>
STEP: delete the pod
Oct 12 18:28:31.802: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-j2jp to disappear
Oct 12 18:28:31.862: INFO: Pod pod-subpath-test-preprovisionedpv-j2jp no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-j2jp
Oct 12 18:28:31.863: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-j2jp" in namespace "provisioning-3550"
... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":7,"skipped":39,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 18:28:24.243: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name projected-secret-test-e02de100-e11b-4a49-8f3e-55ba05ed61e2
STEP: Creating a pod to test consume secrets
Oct 12 18:28:24.617: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6306fe6e-01fd-41f2-9e31-a520549f2718" in namespace "projected-272" to be "Succeeded or Failed"
Oct 12 18:28:24.667: INFO: Pod "pod-projected-secrets-6306fe6e-01fd-41f2-9e31-a520549f2718": Phase="Pending", Reason="", readiness=false. Elapsed: 50.079668ms
Oct 12 18:28:26.718: INFO: Pod "pod-projected-secrets-6306fe6e-01fd-41f2-9e31-a520549f2718": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100938635s
Oct 12 18:28:28.768: INFO: Pod "pod-projected-secrets-6306fe6e-01fd-41f2-9e31-a520549f2718": Phase="Pending", Reason="", readiness=false. Elapsed: 4.151702683s
Oct 12 18:28:30.821: INFO: Pod "pod-projected-secrets-6306fe6e-01fd-41f2-9e31-a520549f2718": Phase="Pending", Reason="", readiness=false. Elapsed: 6.204038722s
Oct 12 18:28:32.872: INFO: Pod "pod-projected-secrets-6306fe6e-01fd-41f2-9e31-a520549f2718": Phase="Pending", Reason="", readiness=false. Elapsed: 8.255550767s
Oct 12 18:28:34.922: INFO: Pod "pod-projected-secrets-6306fe6e-01fd-41f2-9e31-a520549f2718": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.305455891s
STEP: Saw pod success
Oct 12 18:28:34.922: INFO: Pod "pod-projected-secrets-6306fe6e-01fd-41f2-9e31-a520549f2718" satisfied condition "Succeeded or Failed"
Oct 12 18:28:34.977: INFO: Trying to get logs from node ip-172-20-37-53.us-west-1.compute.internal pod pod-projected-secrets-6306fe6e-01fd-41f2-9e31-a520549f2718 container secret-volume-test: <nil>
STEP: delete the pod
Oct 12 18:28:35.092: INFO: Waiting for pod pod-projected-secrets-6306fe6e-01fd-41f2-9e31-a520549f2718 to disappear
Oct 12 18:28:35.141: INFO: Pod pod-projected-secrets-6306fe6e-01fd-41f2-9e31-a520549f2718 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:11.005 seconds]
[sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":103,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:28:35.263: INFO: Only supported for providers [vsphere] (not aws)
... skipping 146 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with different fsgroup applied to the volume contents
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:208
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with different fsgroup applied to the volume contents","total":-1,"completed":9,"skipped":45,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:28:36.911: INFO: Only supported for providers [azure] (not aws)
... skipping 129 lines ...
Oct 12 18:27:39.420: INFO: PersistentVolumeClaim csi-hostpathbss7j found but phase is Pending instead of Bound.
Oct 12 18:27:41.478: INFO: PersistentVolumeClaim csi-hostpathbss7j found but phase is Pending instead of Bound.
Oct 12 18:27:43.531: INFO: PersistentVolumeClaim csi-hostpathbss7j found but phase is Pending instead of Bound.
Oct 12 18:27:45.584: INFO: PersistentVolumeClaim csi-hostpathbss7j found and phase=Bound (6.214879554s)
STEP: Creating pod pod-subpath-test-dynamicpv-4424
STEP: Creating a pod to test atomic-volume-subpath
Oct 12 18:27:45.740: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-4424" in namespace "provisioning-6106" to be "Succeeded or Failed"
Oct 12 18:27:45.792: INFO: Pod "pod-subpath-test-dynamicpv-4424": Phase="Pending", Reason="", readiness=false. Elapsed: 51.282024ms
Oct 12 18:27:47.845: INFO: Pod "pod-subpath-test-dynamicpv-4424": Phase="Pending", Reason="", readiness=false. Elapsed: 2.104653123s
Oct 12 18:27:49.898: INFO: Pod "pod-subpath-test-dynamicpv-4424": Phase="Pending", Reason="", readiness=false. Elapsed: 4.157183705s
Oct 12 18:27:51.950: INFO: Pod "pod-subpath-test-dynamicpv-4424": Phase="Pending", Reason="", readiness=false. Elapsed: 6.20944336s
Oct 12 18:27:54.003: INFO: Pod "pod-subpath-test-dynamicpv-4424": Phase="Pending", Reason="", readiness=false. Elapsed: 8.262338269s
Oct 12 18:27:56.056: INFO: Pod "pod-subpath-test-dynamicpv-4424": Phase="Pending", Reason="", readiness=false. Elapsed: 10.316067531s
... skipping 6 lines ...
Oct 12 18:28:10.432: INFO: Pod "pod-subpath-test-dynamicpv-4424": Phase="Running", Reason="", readiness=true. Elapsed: 24.691791943s
Oct 12 18:28:12.485: INFO: Pod "pod-subpath-test-dynamicpv-4424": Phase="Running", Reason="", readiness=true. Elapsed: 26.744562322s
Oct 12 18:28:14.538: INFO: Pod "pod-subpath-test-dynamicpv-4424": Phase="Running", Reason="", readiness=true. Elapsed: 28.797458068s
Oct 12 18:28:16.591: INFO: Pod "pod-subpath-test-dynamicpv-4424": Phase="Running", Reason="", readiness=true. Elapsed: 30.85057588s
Oct 12 18:28:18.697: INFO: Pod "pod-subpath-test-dynamicpv-4424": Phase="Succeeded", Reason="", readiness=false. Elapsed: 32.956491512s
STEP: Saw pod success
Oct 12 18:28:18.697: INFO: Pod "pod-subpath-test-dynamicpv-4424" satisfied condition "Succeeded or Failed"
Oct 12 18:28:18.807: INFO: Trying to get logs from node ip-172-20-59-223.us-west-1.compute.internal pod pod-subpath-test-dynamicpv-4424 container test-container-subpath-dynamicpv-4424: <nil>
STEP: delete the pod
Oct 12 18:28:19.018: INFO: Waiting for pod pod-subpath-test-dynamicpv-4424 to disappear
Oct 12 18:28:19.071: INFO: Pod pod-subpath-test-dynamicpv-4424 no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-4424
Oct 12 18:28:19.071: INFO: Deleting pod "pod-subpath-test-dynamicpv-4424" in namespace "provisioning-6106"
... skipping 61 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":8,"skipped":33,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:28:38.812: INFO: Only supported for providers [azure] (not aws)
... skipping 172 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":9,"skipped":34,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:28:40.722: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 85 lines ...
• [SLOW TEST:11.523 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":-1,"completed":15,"skipped":90,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 18:28:32.553: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-0a0e09bd-b056-45fe-adff-f73208222ab8
STEP: Creating a pod to test consume configMaps
Oct 12 18:28:32.917: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-485f4953-d55a-4d95-95cc-18cfdde55d6a" in namespace "projected-143" to be "Succeeded or Failed"
Oct 12 18:28:32.968: INFO: Pod "pod-projected-configmaps-485f4953-d55a-4d95-95cc-18cfdde55d6a": Phase="Pending", Reason="", readiness=false. Elapsed: 50.411595ms
Oct 12 18:28:35.019: INFO: Pod "pod-projected-configmaps-485f4953-d55a-4d95-95cc-18cfdde55d6a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.102353117s
Oct 12 18:28:37.074: INFO: Pod "pod-projected-configmaps-485f4953-d55a-4d95-95cc-18cfdde55d6a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.156877685s
Oct 12 18:28:39.129: INFO: Pod "pod-projected-configmaps-485f4953-d55a-4d95-95cc-18cfdde55d6a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.211950744s
Oct 12 18:28:41.186: INFO: Pod "pod-projected-configmaps-485f4953-d55a-4d95-95cc-18cfdde55d6a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.268663827s
STEP: Saw pod success
Oct 12 18:28:41.186: INFO: Pod "pod-projected-configmaps-485f4953-d55a-4d95-95cc-18cfdde55d6a" satisfied condition "Succeeded or Failed"
Oct 12 18:28:41.237: INFO: Trying to get logs from node ip-172-20-47-26.us-west-1.compute.internal pod pod-projected-configmaps-485f4953-d55a-4d95-95cc-18cfdde55d6a container agnhost-container: <nil>
STEP: delete the pod
Oct 12 18:28:41.347: INFO: Waiting for pod pod-projected-configmaps-485f4953-d55a-4d95-95cc-18cfdde55d6a to disappear
Oct 12 18:28:41.397: INFO: Pod pod-projected-configmaps-485f4953-d55a-4d95-95cc-18cfdde55d6a no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:8.948 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":148,"failed":0}

SSS
------------------------------
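
The PASSED entry above is the projected-ConfigMap "defaultMode set" conformance case. As a rough, self-contained Go sketch of the API surface that test exercises (not the actual e2e fixture), the pod below mounts a ConfigMap through a projected volume with an explicit DefaultMode; the resource names, image, and mode value ("demo-cm", "projected-demo", agnhost:2.32, 0400) are illustrative assumptions, not values taken from this log.

    package podspecs

    import (
    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func int32Ptr(i int32) *int32 { return &i }

    // projectedConfigMapPod builds a pod that mounts ConfigMap "demo-cm"
    // via a projected volume whose files are created with mode 0400.
    // All names and the image are illustrative assumptions.
    func projectedConfigMapPod() *corev1.Pod {
    	return &corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{Name: "projected-demo"},
    		Spec: corev1.PodSpec{
    			RestartPolicy: corev1.RestartPolicyNever,
    			Containers: []corev1.Container{{
    				Name:  "agnhost-container",
    				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32",
    				Args:  []string{"pause"},
    				VolumeMounts: []corev1.VolumeMount{{
    					Name:      "projected-vol",
    					MountPath: "/etc/projected",
    					ReadOnly:  true,
    				}},
    			}},
    			Volumes: []corev1.Volume{{
    				Name: "projected-vol",
    				VolumeSource: corev1.VolumeSource{
    					Projected: &corev1.ProjectedVolumeSource{
    						// The "defaultMode set" part of the test: every
    						// projected file gets this mode unless overridden.
    						DefaultMode: int32Ptr(0400),
    						Sources: []corev1.VolumeProjection{{
    							ConfigMap: &corev1.ConfigMapProjection{
    								LocalObjectReference: corev1.LocalObjectReference{Name: "demo-cm"},
    							},
    						}},
    					},
    				},
    			}},
    		},
    	}
    }

Created with client-go (CoreV1().Pods(ns).Create), such a pod would surface each ConfigMap key under /etc/projected with mode 0400, which is what the conformance test asserts from inside the container.
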
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Oct 12 18:28:33.888: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6973d8e1-1d27-4fc3-951b-f66cb909d26b" in namespace "projected-6577" to be "Succeeded or Failed"
Oct 12 18:28:33.940: INFO: Pod "downwardapi-volume-6973d8e1-1d27-4fc3-951b-f66cb909d26b": Phase="Pending", Reason="", readiness=false. Elapsed: 52.105019ms
Oct 12 18:28:35.992: INFO: Pod "downwardapi-volume-6973d8e1-1d27-4fc3-951b-f66cb909d26b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.104296139s
Oct 12 18:28:38.047: INFO: Pod "downwardapi-volume-6973d8e1-1d27-4fc3-951b-f66cb909d26b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.158853844s
Oct 12 18:28:40.101: INFO: Pod "downwardapi-volume-6973d8e1-1d27-4fc3-951b-f66cb909d26b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.213259654s
Oct 12 18:28:42.153: INFO: Pod "downwardapi-volume-6973d8e1-1d27-4fc3-951b-f66cb909d26b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.265313382s
STEP: Saw pod success
Oct 12 18:28:42.153: INFO: Pod "downwardapi-volume-6973d8e1-1d27-4fc3-951b-f66cb909d26b" satisfied condition "Succeeded or Failed"
Oct 12 18:28:42.205: INFO: Trying to get logs from node ip-172-20-56-153.us-west-1.compute.internal pod downwardapi-volume-6973d8e1-1d27-4fc3-951b-f66cb909d26b container client-container: <nil>
STEP: delete the pod
Oct 12 18:28:42.315: INFO: Waiting for pod downwardapi-volume-6973d8e1-1d27-4fc3-951b-f66cb909d26b to disappear
Oct 12 18:28:42.370: INFO: Pod downwardapi-volume-6973d8e1-1d27-4fc3-951b-f66cb909d26b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:8.905 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":50,"failed":0}

SSSSSSSSSSSS
------------------------------
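
The downwardAPI block above validates that a container's memory request can be exposed as a file. As a minimal sketch of that mechanism (using a plain downwardAPI volume rather than the projected form the test suite uses, and with an assumed busybox image and 64Mi request), the pod below publishes its own requests.memory value at /etc/podinfo/memory_request:

    package podspecs

    import (
    	corev1 "k8s.io/api/core/v1"
    	"k8s.io/apimachinery/pkg/api/resource"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // downwardAPIMemoryRequestPod exposes the container's memory request as a
    // file via a downward API volume. Image, request size, and paths are
    // illustrative assumptions.
    func downwardAPIMemoryRequestPod() *corev1.Pod {
    	return &corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-demo"},
    		Spec: corev1.PodSpec{
    			RestartPolicy: corev1.RestartPolicyNever,
    			Containers: []corev1.Container{{
    				Name:    "client-container",
    				Image:   "busybox:1.34",
    				Command: []string{"sh", "-c", "cat /etc/podinfo/memory_request"},
    				Resources: corev1.ResourceRequirements{
    					Requests: corev1.ResourceList{
    						corev1.ResourceMemory: resource.MustParse("64Mi"),
    					},
    				},
    				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
    			}},
    			Volumes: []corev1.Volume{{
    				Name: "podinfo",
    				VolumeSource: corev1.VolumeSource{
    					DownwardAPI: &corev1.DownwardAPIVolumeSource{
    						Items: []corev1.DownwardAPIVolumeFile{{
    							Path: "memory_request",
    							// Resolves to this container's requests.memory,
    							// written into the volume as a plain text file.
    							ResourceFieldRef: &corev1.ResourceFieldSelector{
    								ContainerName: "client-container",
    								Resource:      "requests.memory",
    							},
    						}},
    					},
    				},
    			}},
    		},
    	}
    }
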
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 5 lines ...
[It] should support existing directory
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
Oct 12 18:28:31.780: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Oct 12 18:28:31.780: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-8s28
STEP: Creating a pod to test subpath
Oct 12 18:28:31.860: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-8s28" in namespace "provisioning-4046" to be "Succeeded or Failed"
Oct 12 18:28:31.911: INFO: Pod "pod-subpath-test-inlinevolume-8s28": Phase="Pending", Reason="", readiness=false. Elapsed: 51.556714ms
Oct 12 18:28:33.964: INFO: Pod "pod-subpath-test-inlinevolume-8s28": Phase="Pending", Reason="", readiness=false. Elapsed: 2.104083868s
Oct 12 18:28:36.015: INFO: Pod "pod-subpath-test-inlinevolume-8s28": Phase="Pending", Reason="", readiness=false. Elapsed: 4.155572329s
Oct 12 18:28:38.066: INFO: Pod "pod-subpath-test-inlinevolume-8s28": Phase="Pending", Reason="", readiness=false. Elapsed: 6.206141247s
Oct 12 18:28:40.117: INFO: Pod "pod-subpath-test-inlinevolume-8s28": Phase="Pending", Reason="", readiness=false. Elapsed: 8.257667127s
Oct 12 18:28:42.169: INFO: Pod "pod-subpath-test-inlinevolume-8s28": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.309726517s
STEP: Saw pod success
Oct 12 18:28:42.170: INFO: Pod "pod-subpath-test-inlinevolume-8s28" satisfied condition "Succeeded or Failed"
Oct 12 18:28:42.220: INFO: Trying to get logs from node ip-172-20-59-223.us-west-1.compute.internal pod pod-subpath-test-inlinevolume-8s28 container test-container-volume-inlinevolume-8s28: <nil>
STEP: delete the pod
Oct 12 18:28:42.326: INFO: Waiting for pod pod-subpath-test-inlinevolume-8s28 to disappear
Oct 12 18:28:42.376: INFO: Pod pod-subpath-test-inlinevolume-8s28 no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-8s28
Oct 12 18:28:42.376: INFO: Deleting pod "pod-subpath-test-inlinevolume-8s28" in namespace "provisioning-4046"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support existing directory","total":-1,"completed":11,"skipped":86,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:28:42.589: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 78 lines ...
      Driver hostPath doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
S
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":76,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 18:28:11.189: INFO: >>> kubeConfig: /root/.kube/config
... skipping 24 lines ...
Oct 12 18:28:33.959: INFO: PersistentVolumeClaim pvc-sxbcv found but phase is Pending instead of Bound.
Oct 12 18:28:36.009: INFO: PersistentVolumeClaim pvc-sxbcv found and phase=Bound (16.52657746s)
Oct 12 18:28:36.009: INFO: Waiting up to 3m0s for PersistentVolume local-vkgt5 to have phase Bound
Oct 12 18:28:36.060: INFO: PersistentVolume local-vkgt5 found and phase=Bound (50.533189ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-f7xp
STEP: Creating a pod to test exec-volume-test
Oct 12 18:28:36.210: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-f7xp" in namespace "volume-277" to be "Succeeded or Failed"
Oct 12 18:28:36.259: INFO: Pod "exec-volume-test-preprovisionedpv-f7xp": Phase="Pending", Reason="", readiness=false. Elapsed: 49.551291ms
Oct 12 18:28:38.309: INFO: Pod "exec-volume-test-preprovisionedpv-f7xp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.099510488s
Oct 12 18:28:40.363: INFO: Pod "exec-volume-test-preprovisionedpv-f7xp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.152962752s
STEP: Saw pod success
Oct 12 18:28:40.363: INFO: Pod "exec-volume-test-preprovisionedpv-f7xp" satisfied condition "Succeeded or Failed"
Oct 12 18:28:40.413: INFO: Trying to get logs from node ip-172-20-37-53.us-west-1.compute.internal pod exec-volume-test-preprovisionedpv-f7xp container exec-container-preprovisionedpv-f7xp: <nil>
STEP: delete the pod
Oct 12 18:28:40.522: INFO: Waiting for pod exec-volume-test-preprovisionedpv-f7xp to disappear
Oct 12 18:28:40.575: INFO: Pod exec-volume-test-preprovisionedpv-f7xp no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-f7xp
Oct 12 18:28:40.576: INFO: Deleting pod "exec-volume-test-preprovisionedpv-f7xp" in namespace "volume-277"
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":9,"skipped":76,"failed":0}

SS
------------------------------
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 21 lines ...
• [SLOW TEST:10.386 seconds]
[sig-apps] ReplicaSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Replace and Patch tests [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet Replace and Patch tests [Conformance]","total":-1,"completed":8,"skipped":42,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:28:44.336: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 61 lines ...
Oct 12 18:28:33.041: INFO: PersistentVolumeClaim pvc-rsqm6 found but phase is Pending instead of Bound.
Oct 12 18:28:35.094: INFO: PersistentVolumeClaim pvc-rsqm6 found and phase=Bound (6.207931383s)
Oct 12 18:28:35.094: INFO: Waiting up to 3m0s for PersistentVolume local-rn65c to have phase Bound
Oct 12 18:28:35.143: INFO: PersistentVolume local-rn65c found and phase=Bound (49.321287ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-mbw6
STEP: Creating a pod to test subpath
Oct 12 18:28:35.294: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-mbw6" in namespace "provisioning-511" to be "Succeeded or Failed"
Oct 12 18:28:35.344: INFO: Pod "pod-subpath-test-preprovisionedpv-mbw6": Phase="Pending", Reason="", readiness=false. Elapsed: 49.754375ms
Oct 12 18:28:37.394: INFO: Pod "pod-subpath-test-preprovisionedpv-mbw6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100291408s
Oct 12 18:28:39.444: INFO: Pod "pod-subpath-test-preprovisionedpv-mbw6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.150021184s
Oct 12 18:28:41.495: INFO: Pod "pod-subpath-test-preprovisionedpv-mbw6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.200953373s
Oct 12 18:28:43.545: INFO: Pod "pod-subpath-test-preprovisionedpv-mbw6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.251013059s
STEP: Saw pod success
Oct 12 18:28:43.545: INFO: Pod "pod-subpath-test-preprovisionedpv-mbw6" satisfied condition "Succeeded or Failed"
Oct 12 18:28:43.595: INFO: Trying to get logs from node ip-172-20-56-153.us-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-mbw6 container test-container-volume-preprovisionedpv-mbw6: <nil>
STEP: delete the pod
Oct 12 18:28:43.702: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-mbw6 to disappear
Oct 12 18:28:43.751: INFO: Pod pod-subpath-test-preprovisionedpv-mbw6 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-mbw6
Oct 12 18:28:43.751: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-mbw6" in namespace "provisioning-511"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":9,"skipped":71,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:28:44.636: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 145 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 18:28:45.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "server-version-9060" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":-1,"completed":10,"skipped":97,"failed":0}

SSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 24 lines ...
• [SLOW TEST:7.430 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should support configurable pod DNS nameservers [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":-1,"completed":10,"skipped":43,"failed":0}

S
------------------------------
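
The DNS test above confirms that per-pod DNS nameservers and search domains are honored. The sketch below shows the dnsConfig/dnsPolicy fields that feature relies on; the nameserver, search domain, and image are illustrative assumptions rather than the values the e2e test uses.

    package podspecs

    import (
    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // customDNSPod opts out of cluster DNS entirely (dnsPolicy: None) and
    // supplies its own resolver configuration through dnsConfig.
    func customDNSPod() *corev1.Pod {
    	return &corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{Name: "dns-config-demo"},
    		Spec: corev1.PodSpec{
    			RestartPolicy: corev1.RestartPolicyNever,
    			DNSPolicy:     corev1.DNSNone, // required when all DNS settings are supplied manually
    			DNSConfig: &corev1.PodDNSConfig{
    				Nameservers: []string{"1.1.1.1"},
    				Searches:    []string{"resolv.conf.local"},
    			},
    			Containers: []corev1.Container{{
    				Name:    "util",
    				Image:   "busybox:1.34",
    				Command: []string{"sh", "-c", "cat /etc/resolv.conf"},
    			}},
    		},
    	}
    }

Inside such a pod, /etc/resolv.conf contains only the configured nameserver and search entries, which is essentially what the test asserts.
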
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 32 lines ...
• [SLOW TEST:13.805 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing mutating webhooks should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":-1,"completed":12,"skipped":116,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:28:49.148: INFO: Driver local doesn't support ext4 -- skipping
... skipping 97 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:379
    should support exec through kubectl proxy
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:473
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec through kubectl proxy","total":-1,"completed":12,"skipped":126,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:28:49.752: INFO: Only supported for providers [gce gke] (not aws)
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: windows-gcepd]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1302
------------------------------
... skipping 21 lines ...
• [SLOW TEST:34.264 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group but different versions [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":-1,"completed":6,"skipped":35,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:28:51.990: INFO: Only supported for providers [azure] (not aws)
... skipping 56 lines ...
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-1711
STEP: Waiting until pod test-pod will start running in namespace statefulset-1711
STEP: Creating statefulset with conflicting port in namespace statefulset-1711
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-1711
Oct 12 18:28:26.492: INFO: Observed stateful pod in namespace: statefulset-1711, name: ss-0, uid: b3601bf6-a9f1-4d2b-9d94-c2bac5230b6c, status phase: Pending. Waiting for statefulset controller to delete.
Oct 12 18:28:32.371: INFO: Observed stateful pod in namespace: statefulset-1711, name: ss-0, uid: b3601bf6-a9f1-4d2b-9d94-c2bac5230b6c, status phase: Failed. Waiting for statefulset controller to delete.
Oct 12 18:28:32.378: INFO: Observed stateful pod in namespace: statefulset-1711, name: ss-0, uid: b3601bf6-a9f1-4d2b-9d94-c2bac5230b6c, status phase: Failed. Waiting for statefulset controller to delete.
Oct 12 18:28:32.381: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-1711
STEP: Removing pod with conflicting port in namespace statefulset-1711
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-1711 and will be in running state
[AfterEach] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:118
Oct 12 18:28:42.739: INFO: Deleting all statefulset in ns statefulset-1711
... skipping 11 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:97
    Should recreate evicted statefulset [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":-1,"completed":9,"skipped":63,"failed":0}

SS
------------------------------
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 18:28:21.033: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should set ownership and permission when RunAsUser or FsGroup is present [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/service_accounts.go:488
STEP: Creating a pod to test service account token: 
Oct 12 18:28:21.357: INFO: Waiting up to 5m0s for pod "test-pod-c6ced7a4-ef2e-4827-87cf-774d6e643cbc" in namespace "svcaccounts-2667" to be "Succeeded or Failed"
Oct 12 18:28:21.409: INFO: Pod "test-pod-c6ced7a4-ef2e-4827-87cf-774d6e643cbc": Phase="Pending", Reason="", readiness=false. Elapsed: 51.430675ms
Oct 12 18:28:23.463: INFO: Pod "test-pod-c6ced7a4-ef2e-4827-87cf-774d6e643cbc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.105695976s
Oct 12 18:28:25.516: INFO: Pod "test-pod-c6ced7a4-ef2e-4827-87cf-774d6e643cbc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.158786741s
STEP: Saw pod success
Oct 12 18:28:25.516: INFO: Pod "test-pod-c6ced7a4-ef2e-4827-87cf-774d6e643cbc" satisfied condition "Succeeded or Failed"
Oct 12 18:28:25.569: INFO: Trying to get logs from node ip-172-20-59-223.us-west-1.compute.internal pod test-pod-c6ced7a4-ef2e-4827-87cf-774d6e643cbc container agnhost-container: <nil>
STEP: delete the pod
Oct 12 18:28:25.681: INFO: Waiting for pod test-pod-c6ced7a4-ef2e-4827-87cf-774d6e643cbc to disappear
Oct 12 18:28:25.732: INFO: Pod test-pod-c6ced7a4-ef2e-4827-87cf-774d6e643cbc no longer exists
STEP: Creating a pod to test service account token: 
Oct 12 18:28:25.784: INFO: Waiting up to 5m0s for pod "test-pod-c6ced7a4-ef2e-4827-87cf-774d6e643cbc" in namespace "svcaccounts-2667" to be "Succeeded or Failed"
Oct 12 18:28:25.836: INFO: Pod "test-pod-c6ced7a4-ef2e-4827-87cf-774d6e643cbc": Phase="Pending", Reason="", readiness=false. Elapsed: 51.630098ms
Oct 12 18:28:27.891: INFO: Pod "test-pod-c6ced7a4-ef2e-4827-87cf-774d6e643cbc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.106486011s
Oct 12 18:28:29.958: INFO: Pod "test-pod-c6ced7a4-ef2e-4827-87cf-774d6e643cbc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.173422766s
Oct 12 18:28:32.010: INFO: Pod "test-pod-c6ced7a4-ef2e-4827-87cf-774d6e643cbc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.225557527s
Oct 12 18:28:34.063: INFO: Pod "test-pod-c6ced7a4-ef2e-4827-87cf-774d6e643cbc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.278568818s
Oct 12 18:28:36.114: INFO: Pod "test-pod-c6ced7a4-ef2e-4827-87cf-774d6e643cbc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.330159602s
STEP: Saw pod success
Oct 12 18:28:36.115: INFO: Pod "test-pod-c6ced7a4-ef2e-4827-87cf-774d6e643cbc" satisfied condition "Succeeded or Failed"
Oct 12 18:28:36.167: INFO: Trying to get logs from node ip-172-20-37-53.us-west-1.compute.internal pod test-pod-c6ced7a4-ef2e-4827-87cf-774d6e643cbc container agnhost-container: <nil>
STEP: delete the pod
Oct 12 18:28:36.277: INFO: Waiting for pod test-pod-c6ced7a4-ef2e-4827-87cf-774d6e643cbc to disappear
Oct 12 18:28:36.328: INFO: Pod test-pod-c6ced7a4-ef2e-4827-87cf-774d6e643cbc no longer exists
STEP: Creating a pod to test service account token: 
Oct 12 18:28:36.382: INFO: Waiting up to 5m0s for pod "test-pod-c6ced7a4-ef2e-4827-87cf-774d6e643cbc" in namespace "svcaccounts-2667" to be "Succeeded or Failed"
Oct 12 18:28:36.433: INFO: Pod "test-pod-c6ced7a4-ef2e-4827-87cf-774d6e643cbc": Phase="Pending", Reason="", readiness=false. Elapsed: 51.460288ms
Oct 12 18:28:38.485: INFO: Pod "test-pod-c6ced7a4-ef2e-4827-87cf-774d6e643cbc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.103475491s
Oct 12 18:28:40.538: INFO: Pod "test-pod-c6ced7a4-ef2e-4827-87cf-774d6e643cbc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.156185886s
Oct 12 18:28:42.592: INFO: Pod "test-pod-c6ced7a4-ef2e-4827-87cf-774d6e643cbc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.209940052s
Oct 12 18:28:44.645: INFO: Pod "test-pod-c6ced7a4-ef2e-4827-87cf-774d6e643cbc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.263287264s
STEP: Saw pod success
Oct 12 18:28:44.645: INFO: Pod "test-pod-c6ced7a4-ef2e-4827-87cf-774d6e643cbc" satisfied condition "Succeeded or Failed"
Oct 12 18:28:44.697: INFO: Trying to get logs from node ip-172-20-59-223.us-west-1.compute.internal pod test-pod-c6ced7a4-ef2e-4827-87cf-774d6e643cbc container agnhost-container: <nil>
STEP: delete the pod
Oct 12 18:28:44.804: INFO: Waiting for pod test-pod-c6ced7a4-ef2e-4827-87cf-774d6e643cbc to disappear
Oct 12 18:28:44.855: INFO: Pod test-pod-c6ced7a4-ef2e-4827-87cf-774d6e643cbc no longer exists
STEP: Creating a pod to test service account token: 
Oct 12 18:28:44.908: INFO: Waiting up to 5m0s for pod "test-pod-c6ced7a4-ef2e-4827-87cf-774d6e643cbc" in namespace "svcaccounts-2667" to be "Succeeded or Failed"
Oct 12 18:28:44.961: INFO: Pod "test-pod-c6ced7a4-ef2e-4827-87cf-774d6e643cbc": Phase="Pending", Reason="", readiness=false. Elapsed: 52.746908ms
Oct 12 18:28:47.014: INFO: Pod "test-pod-c6ced7a4-ef2e-4827-87cf-774d6e643cbc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.10561486s
Oct 12 18:28:49.079: INFO: Pod "test-pod-c6ced7a4-ef2e-4827-87cf-774d6e643cbc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.17045365s
Oct 12 18:28:51.131: INFO: Pod "test-pod-c6ced7a4-ef2e-4827-87cf-774d6e643cbc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.222388764s
Oct 12 18:28:53.183: INFO: Pod "test-pod-c6ced7a4-ef2e-4827-87cf-774d6e643cbc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.274505128s
Oct 12 18:28:55.235: INFO: Pod "test-pod-c6ced7a4-ef2e-4827-87cf-774d6e643cbc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.327133069s
STEP: Saw pod success
Oct 12 18:28:55.235: INFO: Pod "test-pod-c6ced7a4-ef2e-4827-87cf-774d6e643cbc" satisfied condition "Succeeded or Failed"
Oct 12 18:28:55.287: INFO: Trying to get logs from node ip-172-20-56-153.us-west-1.compute.internal pod test-pod-c6ced7a4-ef2e-4827-87cf-774d6e643cbc container agnhost-container: <nil>
STEP: delete the pod
Oct 12 18:28:55.396: INFO: Waiting for pod test-pod-c6ced7a4-ef2e-4827-87cf-774d6e643cbc to disappear
Oct 12 18:28:55.447: INFO: Pod test-pod-c6ced7a4-ef2e-4827-87cf-774d6e643cbc no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:34.519 seconds]
[sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should set ownership and permission when RunAsUser or FsGroup is present [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/service_accounts.go:488
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should set ownership and permission when RunAsUser or FsGroup is present [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":16,"skipped":141,"failed":0}

SS
------------------------------
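
The ServiceAccounts case above checks that the projected service account token gets ownership and permissions adjusted when RunAsUser or FsGroup is set. A minimal sketch of that pod-level security context, assuming arbitrary UID/GID values and a busybox image (the real test uses its own fixture and agnhost):

    package podspecs

    import (
    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func int64Ptr(i int64) *int64 { return &i }

    // fsGroupTokenPod runs as a non-root user with an fsGroup set, so the
    // kubelet adjusts ownership/permissions of the automounted service
    // account token for that user and group.
    func fsGroupTokenPod() *corev1.Pod {
    	return &corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{Name: "fsgroup-token-demo"},
    		Spec: corev1.PodSpec{
    			RestartPolicy: corev1.RestartPolicyNever,
    			SecurityContext: &corev1.PodSecurityContext{
    				RunAsUser: int64Ptr(1000), // illustrative UID
    				FSGroup:   int64Ptr(2000), // illustrative GID
    			},
    			Containers: []corev1.Container{{
    				Name:    "agnhost-container",
    				Image:   "busybox:1.34",
    				Command: []string{"sh", "-c", "id && ls -ln /var/run/secrets/kubernetes.io/serviceaccount"},
    			}},
    		},
    	}
    }
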
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:28:55.590: INFO: Only supported for providers [azure] (not aws)
... skipping 87 lines ...
Oct 12 18:28:48.379: INFO: PersistentVolumeClaim pvc-hdcbs found but phase is Pending instead of Bound.
Oct 12 18:28:50.435: INFO: PersistentVolumeClaim pvc-hdcbs found and phase=Bound (8.265222254s)
Oct 12 18:28:50.435: INFO: Waiting up to 3m0s for PersistentVolume local-2b76b to have phase Bound
Oct 12 18:28:50.486: INFO: PersistentVolume local-2b76b found and phase=Bound (51.700777ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-h9zj
STEP: Creating a pod to test subpath
Oct 12 18:28:50.647: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-h9zj" in namespace "provisioning-4143" to be "Succeeded or Failed"
Oct 12 18:28:50.699: INFO: Pod "pod-subpath-test-preprovisionedpv-h9zj": Phase="Pending", Reason="", readiness=false. Elapsed: 51.614466ms
Oct 12 18:28:52.752: INFO: Pod "pod-subpath-test-preprovisionedpv-h9zj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.104588426s
Oct 12 18:28:54.854: INFO: Pod "pod-subpath-test-preprovisionedpv-h9zj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.207140873s
Oct 12 18:28:56.907: INFO: Pod "pod-subpath-test-preprovisionedpv-h9zj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.259941358s
STEP: Saw pod success
Oct 12 18:28:56.907: INFO: Pod "pod-subpath-test-preprovisionedpv-h9zj" satisfied condition "Succeeded or Failed"
Oct 12 18:28:56.958: INFO: Trying to get logs from node ip-172-20-47-26.us-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-h9zj container test-container-volume-preprovisionedpv-h9zj: <nil>
STEP: delete the pod
Oct 12 18:28:57.069: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-h9zj to disappear
Oct 12 18:28:57.121: INFO: Pod pod-subpath-test-preprovisionedpv-h9zj no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-h9zj
Oct 12 18:28:57.121: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-h9zj" in namespace "provisioning-4143"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":10,"skipped":69,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:28:57.959: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 139 lines ...
&Pod{ObjectMeta:{webserver-deployment-795d758f88-2stcd webserver-deployment-795d758f88- deployment-2199  e5d60770-8f30-454d-bac0-2e8959f5667c 14359 0 2021-10-12 18:28:58 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 1a3cbd1e-a29d-410f-a8c2-492f039e1229 0xc0052e4167 0xc0052e4168}] []  [{kube-controller-manager Update v1 2021-10-12 18:28:58 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1a3cbd1e-a29d-410f-a8c2-492f039e1229\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-rmkrd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rmkrd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-20-56-153.us-west-1.compute.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGr
oup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-12 18:28:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Oct 12 18:29:01.456: INFO: Pod "webserver-deployment-795d758f88-6rcq9" is not available:
&Pod{ObjectMeta:{webserver-deployment-795d758f88-6rcq9 webserver-deployment-795d758f88- deployment-2199  b2096a40-ee70-4c5e-bac2-896af73e1888 14385 0 2021-10-12 18:28:58 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 1a3cbd1e-a29d-410f-a8c2-492f039e1229 0xc0052e42d0 0xc0052e42d1}] []  [{kube-controller-manager Update v1 2021-10-12 18:28:58 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1a3cbd1e-a29d-410f-a8c2-492f039e1229\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-5cc8j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5cc8j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-20-56-153.us-west-1.compute.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGr
oup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-12 18:28:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Oct 12 18:29:01.457: INFO: Pod "webserver-deployment-795d758f88-c85zb" is not available:
&Pod{ObjectMeta:{webserver-deployment-795d758f88-c85zb webserver-deployment-795d758f88- deployment-2199  c39a236f-4a9a-4ed9-8e2b-3ebf400c8e90 14477 0 2021-10-12 18:29:01 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 1a3cbd1e-a29d-410f-a8c2-492f039e1229 0xc0052e4430 0xc0052e4431}] []  [{kube-controller-manager Update v1 2021-10-12 18:29:01 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1a3cbd1e-a29d-410f-a8c2-492f039e1229\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-sww5p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sww5p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-20-37-53.us-west-1.compute.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGro
up:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-12 18:29:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Oct 12 18:29:01.457: INFO: Pod "webserver-deployment-795d758f88-d77qp" is not available:
&Pod{ObjectMeta:{webserver-deployment-795d758f88-d77qp webserver-deployment-795d758f88- deployment-2199  13db13a5-b2a9-4025-82ae-f677ae966ed9 14486 0 2021-10-12 18:28:58 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 1a3cbd1e-a29d-410f-a8c2-492f039e1229 0xc0052e4590 0xc0052e4591}] []  [{kube-controller-manager Update v1 2021-10-12 18:28:58 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1a3cbd1e-a29d-410f-a8c2-492f039e1229\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-10-12 18:29:01 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.4.91\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-wpvpd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wpvpd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalat
ion:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-20-37-53.us-west-1.compute.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-12 18:28:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-12 18:28:58 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-12 18:28:58 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-12 18:28:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.20.37.53,PodIP:100.96.4.91,StartTime:2021-10-12 18:28:58 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.4.91,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Oct 12 18:29:01.458: INFO: Pod "webserver-deployment-795d758f88-dmlsq" is not available:
&Pod{ObjectMeta:{webserver-deployment-795d758f88-dmlsq webserver-deployment-795d758f88- deployment-2199  4be79e17-a1fb-4df3-a4db-9c636ac24307 14508 0 2021-10-12 18:29:01 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 1a3cbd1e-a29d-410f-a8c2-492f039e1229 0xc0052e4790 0xc0052e4791}] []  [{kube-controller-manager Update v1 2021-10-12 18:29:01 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1a3cbd1e-a29d-410f-a8c2-492f039e1229\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-nswj8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nswj8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-20-56-153.us-west-1.compute.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGr
oup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-12 18:29:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Oct 12 18:29:01.458: INFO: Pod "webserver-deployment-795d758f88-frp8s" is not available:
&Pod{ObjectMeta:{webserver-deployment-795d758f88-frp8s webserver-deployment-795d758f88- deployment-2199  f0f4394f-4fcb-4566-b494-479797a637f0 14490 0 2021-10-12 18:29:01 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 1a3cbd1e-a29d-410f-a8c2-492f039e1229 0xc0052e48f0 0xc0052e48f1}] []  [{kube-controller-manager Update v1 2021-10-12 18:29:01 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1a3cbd1e-a29d-410f-a8c2-492f039e1229\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-s2nl6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s2nl6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-20-37-53.us-west-1.compute.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGro
up:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-12 18:29:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Oct 12 18:29:01.459: INFO: Pod "webserver-deployment-795d758f88-ftm4g" is not available:
&Pod{ObjectMeta:{webserver-deployment-795d758f88-ftm4g webserver-deployment-795d758f88- deployment-2199  9d6d7694-2d30-418a-a01c-45af8afa6032 14469 0 2021-10-12 18:29:01 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 1a3cbd1e-a29d-410f-a8c2-492f039e1229 0xc0052e4a50 0xc0052e4a51}] []  [{kube-controller-manager Update v1 2021-10-12 18:29:01 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1a3cbd1e-a29d-410f-a8c2-492f039e1229\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-9nt52,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9nt52,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-20-47-26.us-west-1.compute.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGro
up:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-12 18:29:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
... skipping 58 lines ...
• [SLOW TEST:18.934 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":-1,"completed":12,"skipped":95,"failed":0}
[BeforeEach] [sig-storage] Volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 18:29:01.596: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename volume
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 125 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:97
    should have a working scale subresource [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":-1,"completed":16,"skipped":92,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-api-machinery] client-go should negotiate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 45 lines ...
• [SLOW TEST:20.342 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":-1,"completed":10,"skipped":62,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:29:02.907: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 25 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50
[It] nonexistent volume subPath should have the correct mode and owner using FSGroup
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:63
STEP: Creating a pod to test emptydir subpath on tmpfs
Oct 12 18:28:52.371: INFO: Waiting up to 5m0s for pod "pod-6875d015-6c8e-4af7-ba9d-57ffaa9efa14" in namespace "emptydir-2493" to be "Succeeded or Failed"
Oct 12 18:28:52.420: INFO: Pod "pod-6875d015-6c8e-4af7-ba9d-57ffaa9efa14": Phase="Pending", Reason="", readiness=false. Elapsed: 49.456892ms
Oct 12 18:28:54.478: INFO: Pod "pod-6875d015-6c8e-4af7-ba9d-57ffaa9efa14": Phase="Pending", Reason="", readiness=false. Elapsed: 2.107528078s
Oct 12 18:28:56.529: INFO: Pod "pod-6875d015-6c8e-4af7-ba9d-57ffaa9efa14": Phase="Pending", Reason="", readiness=false. Elapsed: 4.158700993s
Oct 12 18:28:58.588: INFO: Pod "pod-6875d015-6c8e-4af7-ba9d-57ffaa9efa14": Phase="Pending", Reason="", readiness=false. Elapsed: 6.217671773s
Oct 12 18:29:00.641: INFO: Pod "pod-6875d015-6c8e-4af7-ba9d-57ffaa9efa14": Phase="Pending", Reason="", readiness=false. Elapsed: 8.270088503s
Oct 12 18:29:02.692: INFO: Pod "pod-6875d015-6c8e-4af7-ba9d-57ffaa9efa14": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.321011669s
STEP: Saw pod success
Oct 12 18:29:02.692: INFO: Pod "pod-6875d015-6c8e-4af7-ba9d-57ffaa9efa14" satisfied condition "Succeeded or Failed"
Oct 12 18:29:02.742: INFO: Trying to get logs from node ip-172-20-59-223.us-west-1.compute.internal pod pod-6875d015-6c8e-4af7-ba9d-57ffaa9efa14 container test-container: <nil>
STEP: delete the pod
Oct 12 18:29:02.855: INFO: Waiting for pod pod-6875d015-6c8e-4af7-ba9d-57ffaa9efa14 to disappear
Oct 12 18:29:02.904: INFO: Pod pod-6875d015-6c8e-4af7-ba9d-57ffaa9efa14 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 6 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:48
    nonexistent volume subPath should have the correct mode and owner using FSGroup
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:63
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] nonexistent volume subPath should have the correct mode and owner using FSGroup","total":-1,"completed":7,"skipped":48,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 416 lines ...
• [SLOW TEST:85.185 seconds]
[sig-network] Conntrack
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should drop INVALID conntrack entries
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:288
------------------------------
{"msg":"PASSED [sig-network] Conntrack should drop INVALID conntrack entries","total":-1,"completed":7,"skipped":34,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:29:08.973: INFO: Only supported for providers [azure] (not aws)
... skipping 32 lines ...
Oct 12 18:28:31.841: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-1665lfpdc
STEP: creating a claim
Oct 12 18:28:31.891: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-gmr6
STEP: Creating a pod to test subpath
Oct 12 18:28:32.045: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-gmr6" in namespace "provisioning-1665" to be "Succeeded or Failed"
Oct 12 18:28:32.095: INFO: Pod "pod-subpath-test-dynamicpv-gmr6": Phase="Pending", Reason="", readiness=false. Elapsed: 49.651313ms
Oct 12 18:28:34.145: INFO: Pod "pod-subpath-test-dynamicpv-gmr6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100018357s
Oct 12 18:28:36.196: INFO: Pod "pod-subpath-test-dynamicpv-gmr6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.150924728s
Oct 12 18:28:38.247: INFO: Pod "pod-subpath-test-dynamicpv-gmr6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.201651717s
Oct 12 18:28:40.297: INFO: Pod "pod-subpath-test-dynamicpv-gmr6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.25173981s
Oct 12 18:28:42.348: INFO: Pod "pod-subpath-test-dynamicpv-gmr6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.302690001s
Oct 12 18:28:44.400: INFO: Pod "pod-subpath-test-dynamicpv-gmr6": Phase="Pending", Reason="", readiness=false. Elapsed: 12.355211986s
Oct 12 18:28:46.451: INFO: Pod "pod-subpath-test-dynamicpv-gmr6": Phase="Pending", Reason="", readiness=false. Elapsed: 14.406121152s
Oct 12 18:28:48.502: INFO: Pod "pod-subpath-test-dynamicpv-gmr6": Phase="Pending", Reason="", readiness=false. Elapsed: 16.456775375s
Oct 12 18:28:50.552: INFO: Pod "pod-subpath-test-dynamicpv-gmr6": Phase="Pending", Reason="", readiness=false. Elapsed: 18.507232624s
Oct 12 18:28:52.604: INFO: Pod "pod-subpath-test-dynamicpv-gmr6": Phase="Pending", Reason="", readiness=false. Elapsed: 20.558942581s
Oct 12 18:28:54.656: INFO: Pod "pod-subpath-test-dynamicpv-gmr6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.610360254s
STEP: Saw pod success
Oct 12 18:28:54.656: INFO: Pod "pod-subpath-test-dynamicpv-gmr6" satisfied condition "Succeeded or Failed"
Oct 12 18:28:54.721: INFO: Trying to get logs from node ip-172-20-56-153.us-west-1.compute.internal pod pod-subpath-test-dynamicpv-gmr6 container test-container-volume-dynamicpv-gmr6: <nil>
STEP: delete the pod
Oct 12 18:28:54.915: INFO: Waiting for pod pod-subpath-test-dynamicpv-gmr6 to disappear
Oct 12 18:28:54.987: INFO: Pod pod-subpath-test-dynamicpv-gmr6 no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-gmr6
Oct 12 18:28:54.987: INFO: Deleting pod "pod-subpath-test-dynamicpv-gmr6" in namespace "provisioning-1665"
... skipping 20 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory","total":-1,"completed":10,"skipped":90,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 18:29:03.034: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:335
Oct 12 18:29:03.337: INFO: Waiting up to 5m0s for pod "alpine-nnp-nil-b6a02848-789d-4b31-bf3a-adeb79c35a7f" in namespace "security-context-test-7281" to be "Succeeded or Failed"
Oct 12 18:29:03.387: INFO: Pod "alpine-nnp-nil-b6a02848-789d-4b31-bf3a-adeb79c35a7f": Phase="Pending", Reason="", readiness=false. Elapsed: 49.853457ms
Oct 12 18:29:05.438: INFO: Pod "alpine-nnp-nil-b6a02848-789d-4b31-bf3a-adeb79c35a7f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100798504s
Oct 12 18:29:07.492: INFO: Pod "alpine-nnp-nil-b6a02848-789d-4b31-bf3a-adeb79c35a7f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.154500235s
Oct 12 18:29:09.543: INFO: Pod "alpine-nnp-nil-b6a02848-789d-4b31-bf3a-adeb79c35a7f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.205423478s
Oct 12 18:29:11.593: INFO: Pod "alpine-nnp-nil-b6a02848-789d-4b31-bf3a-adeb79c35a7f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.255287217s
Oct 12 18:29:13.643: INFO: Pod "alpine-nnp-nil-b6a02848-789d-4b31-bf3a-adeb79c35a7f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.306200342s
Oct 12 18:29:13.644: INFO: Pod "alpine-nnp-nil-b6a02848-789d-4b31-bf3a-adeb79c35a7f" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 18:29:13.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-7281" for this suite.


... skipping 2 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when creating containers with AllowPrivilegeEscalation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:296
    should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:335
------------------------------
{"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":8,"skipped":52,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 18:29:13.836: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 42190 lines ...

b7fb\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-847dcfb7fb-4rdls\"\nE1012 18:35:24.150840       1 pv_controller.go:1451] error finding provisioning plugin for claim provisioning-4993/pvc-5fthd: storageclass.storage.k8s.io \"provisioning-4993\" not found\nI1012 18:35:24.151073       1 event.go:291] \"Event occurred\" object=\"provisioning-4993/pvc-5fthd\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"provisioning-4993\\\" not found\"\nI1012 18:35:24.208828       1 pv_controller.go:879] volume \"local-g6hxk\" entered phase \"Available\"\nI1012 18:35:24.613207       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"deployment-4591/webserver-847dcfb7fb\" need=5 creating=1\nI1012 18:35:24.642302       1 event.go:291] \"Event occurred\" object=\"deployment-4591/webserver-847dcfb7fb\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-847dcfb7fb-m6hnn\"\nI1012 18:35:24.725558       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"deployment-4591/webserver-847dcfb7fb\" need=5 creating=1\nI1012 18:35:24.749323       1 event.go:291] \"Event occurred\" object=\"deployment-4591/webserver-847dcfb7fb\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-847dcfb7fb-rtjvv\"\nI1012 18:35:24.811689       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"deployment-4591/webserver-847dcfb7fb\" need=5 creating=1\nI1012 18:35:24.836551       1 event.go:291] \"Event occurred\" object=\"deployment-4591/webserver-847dcfb7fb\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-847dcfb7fb-r4zbz\"\nI1012 18:35:24.914097       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"deployment-4591/webserver-86f449785c\" need=5 creating=1\nI1012 18:35:24.922870       1 event.go:291] \"Event occurred\" object=\"deployment-4591/webserver-86f449785c\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-86f449785c-9j58k\"\nI1012 18:35:25.022753       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"csi-mock-volumes-7473/pvc-mrgkr\"\nI1012 18:35:25.050521       1 pv_controller.go:640] volume \"pvc-1c6d2f5c-07e7-461a-8ed9-78e2755c6cbc\" is released and reclaim policy \"Delete\" will be executed\nI1012 18:35:25.067297       1 pv_controller.go:879] volume \"pvc-1c6d2f5c-07e7-461a-8ed9-78e2755c6cbc\" entered phase \"Released\"\nI1012 18:35:25.079916       1 pv_controller.go:1340] isVolumeReleased[pvc-1c6d2f5c-07e7-461a-8ed9-78e2755c6cbc]: volume is released\nI1012 18:35:25.117199       1 pv_controller_base.go:505] deletion of claim \"csi-mock-volumes-7473/pvc-mrgkr\" was already processed\nE1012 18:35:25.218076       1 namespace_controller.go:162] deletion of namespace svcaccounts-6418 failed: unexpected items still remain in namespace: svcaccounts-6418 for gvr: /v1, Resource=pods\nE1012 18:35:25.315807       1 namespace_controller.go:162] deletion of namespace pods-4841 failed: unexpected items still remain in namespace: pods-4841 for gvr: /v1, Resource=pods\nE1012 18:35:25.498251       1 namespace_controller.go:162] deletion of namespace svcaccounts-6418 failed: unexpected items still remain in namespace: svcaccounts-6418 for gvr: /v1, Resource=pods\nE1012 
18:35:25.575380       1 namespace_controller.go:162] deletion of namespace pods-4841 failed: unexpected items still remain in namespace: pods-4841 for gvr: /v1, Resource=pods\nE1012 18:35:25.772959       1 namespace_controller.go:162] deletion of namespace svcaccounts-6418 failed: unexpected items still remain in namespace: svcaccounts-6418 for gvr: /v1, Resource=pods\nE1012 18:35:25.845613       1 namespace_controller.go:162] deletion of namespace pods-4841 failed: unexpected items still remain in namespace: pods-4841 for gvr: /v1, Resource=pods\nE1012 18:35:25.975924       1 namespace_controller.go:162] deletion of namespace svcaccounts-6418 failed: unexpected items still remain in namespace: svcaccounts-6418 for gvr: /v1, Resource=pods\nE1012 18:35:26.008174       1 namespace_controller.go:162] deletion of namespace pods-4841 failed: unexpected items still remain in namespace: pods-4841 for gvr: /v1, Resource=pods\nE1012 18:35:26.203065       1 namespace_controller.go:162] deletion of namespace svcaccounts-6418 failed: unexpected items still remain in namespace: svcaccounts-6418 for gvr: /v1, Resource=pods\nE1012 18:35:26.223162       1 namespace_controller.go:162] deletion of namespace pods-4841 failed: unexpected items still remain in namespace: pods-4841 for gvr: /v1, Resource=pods\nE1012 18:35:26.556686       1 namespace_controller.go:162] deletion of namespace pods-4841 failed: unexpected items still remain in namespace: pods-4841 for gvr: /v1, Resource=pods\nE1012 18:35:26.558902       1 namespace_controller.go:162] deletion of namespace svcaccounts-6418 failed: unexpected items still remain in namespace: svcaccounts-6418 for gvr: /v1, Resource=pods\nE1012 18:35:26.646383       1 namespace_controller.go:162] deletion of namespace apply-7370 failed: unexpected items still remain in namespace: apply-7370 for gvr: /v1, Resource=pods\nI1012 18:35:26.717376       1 replica_set.go:599] \"Too many replicas\" replicaSet=\"deployment-4591/webserver-86f449785c\" need=4 deleting=1\nI1012 18:35:26.717946       1 replica_set.go:227] \"Found related ReplicaSets\" replicaSet=\"deployment-4591/webserver-86f449785c\" relatedReplicaSets=[webserver-847dcfb7fb webserver-86f449785c]\nI1012 18:35:26.718218       1 controller_utils.go:592] \"Deleting pod\" controller=\"webserver-86f449785c\" pod=\"deployment-4591/webserver-86f449785c-9j58k\"\nI1012 18:35:26.717888       1 event.go:291] \"Event occurred\" object=\"deployment-4591/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-86f449785c to 4\"\nI1012 18:35:26.726738       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"deployment-4591/webserver-847dcfb7fb\" need=6 creating=1\nI1012 18:35:26.730342       1 event.go:291] \"Event occurred\" object=\"deployment-4591/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-847dcfb7fb to 6\"\nI1012 18:35:26.749595       1 event.go:291] \"Event occurred\" object=\"deployment-4591/webserver-847dcfb7fb\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-847dcfb7fb-rjmd2\"\nI1012 18:35:26.749623       1 event.go:291] \"Event occurred\" object=\"deployment-4591/webserver-86f449785c\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-86f449785c-9j58k\"\nI1012 18:35:26.768587       1 
deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-4591/webserver\" err=\"Operation cannot be fulfilled on deployments.apps \\\"webserver\\\": the object has been modified; please apply your changes to the latest version and try again\"\nE1012 18:35:26.923213       1 namespace_controller.go:162] deletion of namespace svcaccounts-6418 failed: unexpected items still remain in namespace: svcaccounts-6418 for gvr: /v1, Resource=pods\nE1012 18:35:26.928585       1 namespace_controller.go:162] deletion of namespace pods-4841 failed: unexpected items still remain in namespace: pods-4841 for gvr: /v1, Resource=pods\nE1012 18:35:27.406363       1 namespace_controller.go:162] deletion of namespace svcaccounts-6418 failed: unexpected items still remain in namespace: svcaccounts-6418 for gvr: /v1, Resource=pods\nE1012 18:35:27.407477       1 namespace_controller.go:162] deletion of namespace pods-4841 failed: unexpected items still remain in namespace: pods-4841 for gvr: /v1, Resource=pods\nI1012 18:35:28.013488       1 namespace_controller.go:185] Namespace has been deleted services-2016\nI1012 18:35:28.124066       1 aws.go:4725] Ignoring DependencyViolation while deleting load-balancer security group (sg-064971a6bdceb817d), assuming because LB is in process of deleting\nI1012 18:35:28.124092       1 aws.go:4749] Waiting for load-balancer to delete so we can delete security groups: test-rolling-update-with-lb\nI1012 18:35:28.256748       1 namespace_controller.go:185] Namespace has been deleted sysctl-9192\nE1012 18:35:28.282238       1 namespace_controller.go:162] deletion of namespace pods-4841 failed: unexpected items still remain in namespace: pods-4841 for gvr: /v1, Resource=pods\nE1012 18:35:28.290019       1 namespace_controller.go:162] deletion of namespace svcaccounts-6418 failed: unexpected items still remain in namespace: svcaccounts-6418 for gvr: /v1, Resource=pods\nI1012 18:35:28.411478       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"deployment-4591/webserver-847dcfb7fb\" need=6 creating=1\nI1012 18:35:28.416538       1 event.go:291] \"Event occurred\" object=\"deployment-4591/webserver-847dcfb7fb\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-847dcfb7fb-nt6jt\"\nI1012 18:35:28.479759       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"deployment-4591/webserver-847dcfb7fb\" need=6 creating=1\nI1012 18:35:28.485595       1 event.go:291] \"Event occurred\" object=\"deployment-4591/webserver-847dcfb7fb\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-847dcfb7fb-4gvs4\"\nI1012 18:35:28.536462       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"deployment-4591/webserver-86f449785c\" need=4 creating=1\nI1012 18:35:28.545718       1 event.go:291] \"Event occurred\" object=\"deployment-4591/webserver-86f449785c\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-86f449785c-vzvrw\"\nE1012 18:35:28.636979       1 tokens_controller.go:262] error synchronizing serviceaccount services-5483/default: secrets \"default-token-hs2jd\" is forbidden: unable to create new content in namespace services-5483 because it is being terminated\nI1012 18:35:29.000128       1 deployment_controller.go:583] \"Deployment has been deleted\" deployment=\"webhook-925/sample-webhook-deployment\"\nI1012 18:35:29.021106       1 event.go:291] 
\"Event occurred\" object=\"provisioning-6739/aws2455t\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForFirstConsumer\" message=\"waiting for first consumer to be created before binding\"\nI1012 18:35:29.133381       1 event.go:291] \"Event occurred\" object=\"provisioning-6739/aws2455t\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"ebs.csi.aws.com\\\" or manually created by system administrator\"\nI1012 18:35:29.133410       1 event.go:291] \"Event occurred\" object=\"provisioning-6739/aws2455t\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"ebs.csi.aws.com\\\" or manually created by system administrator\"\nI1012 18:35:29.327155       1 namespace_controller.go:185] Namespace has been deleted container-probe-4507\nI1012 18:35:29.395009       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"csi-mock-volumes-7792/pvc-px28r\"\nI1012 18:35:29.400954       1 pv_controller.go:640] volume \"pvc-8d8dea99-e9c0-49cc-b75d-d60d90af8066\" is released and reclaim policy \"Delete\" will be executed\nI1012 18:35:29.408476       1 pv_controller.go:879] volume \"pvc-8d8dea99-e9c0-49cc-b75d-d60d90af8066\" entered phase \"Released\"\nI1012 18:35:29.413326       1 pv_controller.go:1340] isVolumeReleased[pvc-8d8dea99-e9c0-49cc-b75d-d60d90af8066]: volume is released\nW1012 18:35:29.541811       1 reconciler.go:335] Multi-Attach error for volume \"pvc-ec5eba8d-3bd9-469b-bd05-9c29f5bfa4a7\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-086f04460a3abd69a\") from node \"ip-172-20-37-53.us-west-1.compute.internal\" Volume is already exclusively attached to node ip-172-20-47-26.us-west-1.compute.internal and can't be attached to another\nI1012 18:35:29.541983       1 event.go:291] \"Event occurred\" object=\"fsgroupchangepolicy-376/pod-be9c49d1-437a-41ed-a89b-db017d983373\" kind=\"Pod\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedAttachVolume\" message=\"Multi-Attach error for volume \\\"pvc-ec5eba8d-3bd9-469b-bd05-9c29f5bfa4a7\\\" Volume is already exclusively attached to one node and can't be attached to another\"\nI1012 18:35:29.625039       1 pv_controller_base.go:505] deletion of claim \"volumemode-5158/pvc-qrzhr\" was already processed\nE1012 18:35:29.926534       1 namespace_controller.go:162] deletion of namespace pods-4841 failed: unexpected items still remain in namespace: pods-4841 for gvr: /v1, Resource=pods\nE1012 18:35:29.928399       1 namespace_controller.go:162] deletion of namespace svcaccounts-6418 failed: unexpected items still remain in namespace: svcaccounts-6418 for gvr: /v1, Resource=pods\nE1012 18:35:29.959845       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI1012 18:35:30.007218       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume \"aws-vqvfm\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0fa2249b1d755df3c\") on node \"ip-172-20-47-26.us-west-1.compute.internal\" \nE1012 18:35:30.034148       1 pv_controller.go:1451] error finding provisioning plugin for claim ephemeral-3192/inline-volume-zbl55-my-volume: storageclass.storage.k8s.io \"no-such-storage-class\" not found\nI1012 
18:35:30.035079       1 event.go:291] \"Event occurred\" object=\"ephemeral-3192/inline-volume-zbl55-my-volume\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"no-such-storage-class\\\" not found\"\nI1012 18:35:30.180966       1 graph_builder.go:587] add [v1/Pod, namespace: ephemeral-3192, name: inline-volume-zbl55, uid: df25de11-e0af-4856-8136-f2083a0595ee] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI1012 18:35:30.181730       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-3192/inline-volume-zbl55-my-volume\" objectUID=dea81277-2d4f-4bf1-be94-abbf1653c77f kind=\"PersistentVolumeClaim\" virtual=false\nI1012 18:35:30.181884       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-3192/inline-volume-zbl55\" objectUID=df25de11-e0af-4856-8136-f2083a0595ee kind=\"Pod\" virtual=false\nI1012 18:35:30.184605       1 garbagecollector.go:595] adding [v1/PersistentVolumeClaim, namespace: ephemeral-3192, name: inline-volume-zbl55-my-volume, uid: dea81277-2d4f-4bf1-be94-abbf1653c77f] to attemptToDelete, because its owner [v1/Pod, namespace: ephemeral-3192, name: inline-volume-zbl55, uid: df25de11-e0af-4856-8136-f2083a0595ee] is deletingDependents\nI1012 18:35:30.185426       1 garbagecollector.go:580] \"Deleting object\" object=\"ephemeral-3192/inline-volume-zbl55-my-volume\" objectUID=dea81277-2d4f-4bf1-be94-abbf1653c77f kind=\"PersistentVolumeClaim\" propagationPolicy=Background\nE1012 18:35:30.188115       1 pv_controller.go:1451] error finding provisioning plugin for claim ephemeral-3192/inline-volume-zbl55-my-volume: storageclass.storage.k8s.io \"no-such-storage-class\" not found\nI1012 18:35:30.188688       1 event.go:291] \"Event occurred\" object=\"ephemeral-3192/inline-volume-zbl55-my-volume\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"no-such-storage-class\\\" not found\"\nI1012 18:35:30.190850       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-3192/inline-volume-zbl55-my-volume\" objectUID=dea81277-2d4f-4bf1-be94-abbf1653c77f kind=\"PersistentVolumeClaim\" virtual=false\nI1012 18:35:30.191069       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"ephemeral-3192/inline-volume-zbl55-my-volume\"\nI1012 18:35:30.195903       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-3192/inline-volume-zbl55\" objectUID=df25de11-e0af-4856-8136-f2083a0595ee kind=\"Pod\" virtual=false\nI1012 18:35:30.197498       1 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: ephemeral-3192, name: inline-volume-zbl55, uid: df25de11-e0af-4856-8136-f2083a0595ee]\nI1012 18:35:30.231218       1 event.go:291] \"Event occurred\" object=\"deployment-4591/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-86f449785c to 3\"\nI1012 18:35:30.231601       1 replica_set.go:599] \"Too many replicas\" replicaSet=\"deployment-4591/webserver-86f449785c\" need=3 deleting=1\nI1012 18:35:30.231634       1 replica_set.go:227] \"Found related ReplicaSets\" replicaSet=\"deployment-4591/webserver-86f449785c\" relatedReplicaSets=[webserver-847dcfb7fb webserver-86f449785c]\nI1012 18:35:30.231881       1 controller_utils.go:592] \"Deleting pod\" controller=\"webserver-86f449785c\" 
pod=\"deployment-4591/webserver-86f449785c-vzvrw\"\nI1012 18:35:30.243725       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"deployment-4591/webserver-847dcfb7fb\" need=7 creating=1\nI1012 18:35:30.244817       1 event.go:291] \"Event occurred\" object=\"deployment-4591/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-847dcfb7fb to 7\"\nI1012 18:35:30.249133       1 event.go:291] \"Event occurred\" object=\"deployment-4591/webserver-86f449785c\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-86f449785c-vzvrw\"\nI1012 18:35:30.251223       1 event.go:291] \"Event occurred\" object=\"deployment-4591/webserver-847dcfb7fb\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-847dcfb7fb-h75m6\"\nI1012 18:35:30.255280       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-4591/webserver\" err=\"Operation cannot be fulfilled on deployments.apps \\\"webserver\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI1012 18:35:30.288744       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-4591/webserver\" err=\"Operation cannot be fulfilled on deployments.apps \\\"webserver\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI1012 18:35:30.294945       1 namespace_controller.go:185] Namespace has been deleted job-6170\nI1012 18:35:30.573887       1 garbagecollector.go:471] \"Processing object\" object=\"volume-expand-2473-1840/csi-hostpathplugin-7c77c74c64\" objectUID=f533951b-db3c-41cd-a1ff-db86a244e09e kind=\"ControllerRevision\" virtual=false\nI1012 18:35:30.574309       1 stateful_set.go:440] StatefulSet has been deleted volume-expand-2473-1840/csi-hostpathplugin\nI1012 18:35:30.574407       1 garbagecollector.go:471] \"Processing object\" object=\"volume-expand-2473-1840/csi-hostpathplugin-0\" objectUID=b0e5d910-beff-422c-b667-285a28376142 kind=\"Pod\" virtual=false\nI1012 18:35:30.580516       1 garbagecollector.go:580] \"Deleting object\" object=\"volume-expand-2473-1840/csi-hostpathplugin-0\" objectUID=b0e5d910-beff-422c-b667-285a28376142 kind=\"Pod\" propagationPolicy=Background\nI1012 18:35:30.580517       1 garbagecollector.go:580] \"Deleting object\" object=\"volume-expand-2473-1840/csi-hostpathplugin-7c77c74c64\" objectUID=f533951b-db3c-41cd-a1ff-db86a244e09e kind=\"ControllerRevision\" propagationPolicy=Background\nI1012 18:35:30.860047       1 namespace_controller.go:185] Namespace has been deleted containers-5090\nI1012 18:35:30.870929       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-ec5eba8d-3bd9-469b-bd05-9c29f5bfa4a7\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-086f04460a3abd69a\") on node \"ip-172-20-47-26.us-west-1.compute.internal\" \nI1012 18:35:30.874219       1 operation_generator.go:1577] Verified volume is safe to detach for volume \"pvc-ec5eba8d-3bd9-469b-bd05-9c29f5bfa4a7\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-086f04460a3abd69a\") on node \"ip-172-20-47-26.us-west-1.compute.internal\" \nE1012 18:35:31.211710       1 pv_controller.go:1451] error finding provisioning plugin for claim provisioning-8118/pvc-f7kzk: storageclass.storage.k8s.io \"provisioning-8118\" not found\nI1012 18:35:31.212020       1 
event.go:291] \"Event occurred\" object=\"provisioning-8118/pvc-f7kzk\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"provisioning-8118\\\" not found\"\nI1012 18:35:31.266305       1 pv_controller.go:879] volume \"local-h7hsp\" entered phase \"Available\"\nI1012 18:35:31.426255       1 event.go:291] \"Event occurred\" object=\"deployment-4591/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-86f449785c to 2\"\nI1012 18:35:31.426502       1 replica_set.go:599] \"Too many replicas\" replicaSet=\"deployment-4591/webserver-86f449785c\" need=2 deleting=1\nI1012 18:35:31.426671       1 replica_set.go:227] \"Found related ReplicaSets\" replicaSet=\"deployment-4591/webserver-86f449785c\" relatedReplicaSets=[webserver-86f449785c webserver-847dcfb7fb]\nI1012 18:35:31.426865       1 controller_utils.go:592] \"Deleting pod\" controller=\"webserver-86f449785c\" pod=\"deployment-4591/webserver-86f449785c-hbgg6\"\nI1012 18:35:31.435711       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"deployment-4591/webserver-847dcfb7fb\" need=8 creating=1\nI1012 18:35:31.436204       1 event.go:291] \"Event occurred\" object=\"deployment-4591/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-847dcfb7fb to 8\"\nI1012 18:35:31.437939       1 event.go:291] \"Event occurred\" object=\"deployment-4591/webserver-86f449785c\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-86f449785c-hbgg6\"\nI1012 18:35:31.453748       1 event.go:291] \"Event occurred\" object=\"deployment-4591/webserver-847dcfb7fb\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-847dcfb7fb-d6qvr\"\nI1012 18:35:31.466927       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-4591/webserver\" err=\"Operation cannot be fulfilled on deployments.apps \\\"webserver\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI1012 18:35:31.568447       1 garbagecollector.go:471] \"Processing object\" object=\"services-84/affinity-clusterip-s9jdz\" objectUID=5c920bfe-c05c-4be7-8f86-b73f85b23fdc kind=\"Pod\" virtual=false\nI1012 18:35:31.569058       1 garbagecollector.go:471] \"Processing object\" object=\"services-84/affinity-clusterip-dfrzj\" objectUID=d43901b4-3300-4534-b744-077b341ef909 kind=\"Pod\" virtual=false\nI1012 18:35:31.569145       1 garbagecollector.go:471] \"Processing object\" object=\"services-84/affinity-clusterip-b5jq6\" objectUID=e0f9095c-e203-4915-9236-236c9246fa36 kind=\"Pod\" virtual=false\nI1012 18:35:31.571826       1 garbagecollector.go:580] \"Deleting object\" object=\"services-84/affinity-clusterip-b5jq6\" objectUID=e0f9095c-e203-4915-9236-236c9246fa36 kind=\"Pod\" propagationPolicy=Background\nI1012 18:35:31.572056       1 garbagecollector.go:580] \"Deleting object\" object=\"services-84/affinity-clusterip-dfrzj\" objectUID=d43901b4-3300-4534-b744-077b341ef909 kind=\"Pod\" propagationPolicy=Background\nI1012 18:35:31.572418       1 garbagecollector.go:580] \"Deleting object\" object=\"services-84/affinity-clusterip-s9jdz\" objectUID=5c920bfe-c05c-4be7-8f86-b73f85b23fdc kind=\"Pod\" propagationPolicy=Background\nE1012 18:35:32.014375   
    1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE1012 18:35:32.028390       1 namespace_controller.go:162] deletion of namespace apply-7370 failed: unexpected items still remain in namespace: apply-7370 for gvr: /v1, Resource=pods\nI1012 18:35:32.291983       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-8d8dea99-e9c0-49cc-b75d-d60d90af8066\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-7792^4\") on node \"ip-172-20-56-153.us-west-1.compute.internal\" \nI1012 18:35:32.300056       1 operation_generator.go:1577] Verified volume is safe to detach for volume \"pvc-8d8dea99-e9c0-49cc-b75d-d60d90af8066\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-7792^4\") on node \"ip-172-20-56-153.us-west-1.compute.internal\" \nE1012 18:35:32.432303       1 pv_protection_controller.go:118] PV pvc-8d8dea99-e9c0-49cc-b75d-d60d90af8066 failed with : Operation cannot be fulfilled on persistentvolumes \"pvc-8d8dea99-e9c0-49cc-b75d-d60d90af8066\": the object has been modified; please apply your changes to the latest version and try again\nI1012 18:35:32.437433       1 replica_set.go:599] \"Too many replicas\" replicaSet=\"deployment-4591/webserver-86f449785c\" need=1 deleting=1\nI1012 18:35:32.437467       1 replica_set.go:227] \"Found related ReplicaSets\" replicaSet=\"deployment-4591/webserver-86f449785c\" relatedReplicaSets=[webserver-847dcfb7fb webserver-86f449785c]\nI1012 18:35:32.438901       1 controller_utils.go:592] \"Deleting pod\" controller=\"webserver-86f449785c\" pod=\"deployment-4591/webserver-86f449785c-j77z6\"\nI1012 18:35:32.439400       1 event.go:291] \"Event occurred\" object=\"deployment-4591/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-86f449785c to 1\"\nI1012 18:35:32.444033       1 pv_controller_base.go:505] deletion of claim \"csi-mock-volumes-7792/pvc-px28r\" was already processed\nE1012 18:35:32.454119       1 pv_protection_controller.go:118] PV pvc-8d8dea99-e9c0-49cc-b75d-d60d90af8066 failed with : Operation cannot be fulfilled on persistentvolumes \"pvc-8d8dea99-e9c0-49cc-b75d-d60d90af8066\": StorageError: invalid object, Code: 4, Key: /registry/persistentvolumes/pvc-8d8dea99-e9c0-49cc-b75d-d60d90af8066, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 151b861a-f17f-48d6-9bc4-430311ec7054, UID in object meta: \nI1012 18:35:32.465987       1 event.go:291] \"Event occurred\" object=\"deployment-4591/webserver-86f449785c\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-86f449785c-j77z6\"\nI1012 18:35:32.518719       1 pv_controller.go:879] volume \"pvc-ef2cc40a-355d-46b5-ada7-9ba8be59289b\" entered phase \"Bound\"\nI1012 18:35:32.519618       1 pv_controller.go:982] volume \"pvc-ef2cc40a-355d-46b5-ada7-9ba8be59289b\" bound to claim \"provisioning-6739/aws2455t\"\nI1012 18:35:32.536196       1 pv_controller.go:823] claim \"provisioning-6739/aws2455t\" entered phase \"Bound\"\nI1012 18:35:32.555437       1 garbagecollector.go:471] \"Processing object\" object=\"svc-latency-6228/svc-latency-rc-gdc4c\" objectUID=6120c87a-ae74-4299-b39b-a429f3de3291 kind=\"Pod\" virtual=false\nI1012 18:35:32.559250       1 garbagecollector.go:580] \"Deleting object\" 
object=\"svc-latency-6228/svc-latency-rc-gdc4c\" objectUID=6120c87a-ae74-4299-b39b-a429f3de3291 kind=\"Pod\" propagationPolicy=Background\nI1012 18:35:32.712027       1 garbagecollector.go:471] \"Processing object\" object=\"services-1654/nodeport-test-kp8hq\" objectUID=d57307a6-d2dd-4143-91b5-9019d3964cc1 kind=\"EndpointSlice\" virtual=false\nI1012 18:35:32.726777       1 garbagecollector.go:580] \"Deleting object\" object=\"services-1654/nodeport-test-kp8hq\" objectUID=d57307a6-d2dd-4143-91b5-9019d3964cc1 kind=\"EndpointSlice\" propagationPolicy=Background\nI1012 18:35:32.732516       1 namespace_controller.go:185] Namespace has been deleted volume-expand-2473\nI1012 18:35:32.835454       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume \"pvc-8d8dea99-e9c0-49cc-b75d-d60d90af8066\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-7792^4\") on node \"ip-172-20-56-153.us-west-1.compute.internal\" \nI1012 18:35:32.839819       1 event.go:291] \"Event occurred\" object=\"ephemeral-3192-2886/csi-hostpathplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful\"\nI1012 18:35:32.985547       1 event.go:291] \"Event occurred\" object=\"ephemeral-3192/inline-volume-tester-bgcck-my-volume-0\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForPodScheduled\" message=\"waiting for pod inline-volume-tester-bgcck to be scheduled\"\nI1012 18:35:32.993321       1 garbagecollector.go:471] \"Processing object\" object=\"services-1654/nodeport-test-9thjp\" objectUID=e859dbf6-e3ad-49fe-aa11-6a8039c7d1a3 kind=\"Pod\" virtual=false\nI1012 18:35:32.993686       1 garbagecollector.go:471] \"Processing object\" object=\"services-1654/nodeport-test-jbjgr\" objectUID=36d64acd-958a-457b-98d2-39a2463dcd79 kind=\"Pod\" virtual=false\nI1012 18:35:32.999696       1 garbagecollector.go:580] \"Deleting object\" object=\"services-1654/nodeport-test-9thjp\" objectUID=e859dbf6-e3ad-49fe-aa11-6a8039c7d1a3 kind=\"Pod\" propagationPolicy=Background\nI1012 18:35:32.999868       1 garbagecollector.go:580] \"Deleting object\" object=\"services-1654/nodeport-test-jbjgr\" objectUID=36d64acd-958a-457b-98d2-39a2463dcd79 kind=\"Pod\" propagationPolicy=Background\nI1012 18:35:33.038873       1 endpoints_controller.go:374] \"Error syncing endpoints, retrying\" service=\"svc-latency-6228/latency-svc-88vbv\" err=\"Operation cannot be fulfilled on endpoints \\\"latency-svc-88vbv\\\": StorageError: invalid object, Code: 4, Key: /registry/services/endpoints/svc-latency-6228/latency-svc-88vbv, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: cf655be4-47b3-4290-9b67-8711db95e632, UID in object meta: \"\nI1012 18:35:33.041289       1 event.go:291] \"Event occurred\" object=\"svc-latency-6228/latency-svc-88vbv\" kind=\"Endpoints\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedToUpdateEndpoint\" message=\"Failed to update endpoint svc-latency-6228/latency-svc-88vbv: Operation cannot be fulfilled on endpoints \\\"latency-svc-88vbv\\\": StorageError: invalid object, Code: 4, Key: /registry/services/endpoints/svc-latency-6228/latency-svc-88vbv, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: cf655be4-47b3-4290-9b67-8711db95e632, UID in object meta: \"\nE1012 18:35:33.047160       1 tokens_controller.go:262] error synchronizing serviceaccount services-1654/default: secrets \"default-token-n9tc2\" 
is forbidden: unable to create new content in namespace services-1654 because it is being terminated\nE1012 18:35:33.078579       1 namespace_controller.go:162] deletion of namespace pods-4841 failed: unexpected items still remain in namespace: pods-4841 for gvr: /v1, Resource=pods\nE1012 18:35:33.091702       1 namespace_controller.go:162] deletion of namespace svcaccounts-6418 failed: unexpected items still remain in namespace: svcaccounts-6418 for gvr: /v1, Resource=pods\nE1012 18:35:33.128581       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI1012 18:35:33.216936       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-ef2cc40a-355d-46b5-ada7-9ba8be59289b\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-09ca24a10595e130d\") from node \"ip-172-20-37-53.us-west-1.compute.internal\" \nE1012 18:35:33.260299       1 namespace_controller.go:162] deletion of namespace services-1654 failed: unexpected items still remain in namespace: services-1654 for gvr: /v1, Resource=pods\nE1012 18:35:33.336462       1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"latency-svc-88vbv.16ad5c4ebc1f5f55\", GenerateName:\"\", Namespace:\"svc-latency-6228\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"Endpoints\", Namespace:\"svc-latency-6228\", Name:\"latency-svc-88vbv\", UID:\"cf655be4-47b3-4290-9b67-8711db95e632\", APIVersion:\"v1\", ResourceVersion:\"31336\", FieldPath:\"\"}, Reason:\"FailedToUpdateEndpoint\", Message:\"Failed to update endpoint svc-latency-6228/latency-svc-88vbv: Operation cannot be fulfilled on endpoints \\\"latency-svc-88vbv\\\": StorageError: invalid object, Code: 4, Key: /registry/services/endpoints/svc-latency-6228/latency-svc-88vbv, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: cf655be4-47b3-4290-9b67-8711db95e632, UID in object meta: \", Source:v1.EventSource{Component:\"endpoint-controller\", Host:\"\"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc051933d424f0d55, ext:987804980768, loc:(*time.Location)(0x750cdc0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc051933d424f0d55, ext:987804980768, loc:(*time.Location)(0x750cdc0)}}, Count:1, Type:\"Warning\", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"latency-svc-88vbv.16ad5c4ebc1f5f55\" is forbidden: unable to create new content in namespace svc-latency-6228 because it is being terminated' (will not retry!)\nE1012 18:35:33.566541       1 namespace_controller.go:162] deletion of namespace services-1654 failed: unexpected items still remain in namespace: services-1654 for gvr: /v1, Resource=pods\nE1012 18:35:33.649683       1 tokens_controller.go:262] error synchronizing serviceaccount dns-9572/default: secrets 
\"default-token-mnj4n\" is forbidden: unable to create new content in namespace dns-9572 because it is being terminated\nE1012 18:35:33.712860       1 tokens_controller.go:262] error synchronizing serviceaccount gc-1970/default: secrets \"default-token-m8k5m\" is forbidden: unable to create new content in namespace gc-1970 because it is being terminated\nI1012 18:35:33.737495       1 endpoints_controller.go:374] \"Error syncing endpoints, retrying\" service=\"svc-latency-6228/latency-svc-qrmvf\" err=\"Operation cannot be fulfilled on endpoints \\\"latency-svc-qrmvf\\\": StorageError: invalid object, Code: 4, Key: /registry/services/endpoints/svc-latency-6228/latency-svc-qrmvf, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: ed5eff90-bad9-4638-ae8b-5b5697003357, UID in object meta: \"\nI1012 18:35:33.739162       1 event.go:291] \"Event occurred\" object=\"svc-latency-6228/latency-svc-qrmvf\" kind=\"Endpoints\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedToUpdateEndpoint\" message=\"Failed to update endpoint svc-latency-6228/latency-svc-qrmvf: Operation cannot be fulfilled on endpoints \\\"latency-svc-qrmvf\\\": StorageError: invalid object, Code: 4, Key: /registry/services/endpoints/svc-latency-6228/latency-svc-qrmvf, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: ed5eff90-bad9-4638-ae8b-5b5697003357, UID in object meta: \"\nI1012 18:35:33.744393       1 namespace_controller.go:185] Namespace has been deleted services-5483\nI1012 18:35:33.892678       1 endpoints_controller.go:374] \"Error syncing endpoints, retrying\" service=\"svc-latency-6228/latency-svc-w56fr\" err=\"Operation cannot be fulfilled on endpoints \\\"latency-svc-w56fr\\\": StorageError: invalid object, Code: 4, Key: /registry/services/endpoints/svc-latency-6228/latency-svc-w56fr, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 3a9b7617-1451-49aa-a78e-cb7ac628c396, UID in object meta: \"\nI1012 18:35:33.892992       1 event.go:291] \"Event occurred\" object=\"svc-latency-6228/latency-svc-w56fr\" kind=\"Endpoints\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedToUpdateEndpoint\" message=\"Failed to update endpoint svc-latency-6228/latency-svc-w56fr: Operation cannot be fulfilled on endpoints \\\"latency-svc-w56fr\\\": StorageError: invalid object, Code: 4, Key: /registry/services/endpoints/svc-latency-6228/latency-svc-w56fr, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 3a9b7617-1451-49aa-a78e-cb7ac628c396, UID in object meta: \"\nE1012 18:35:33.896631       1 namespace_controller.go:162] deletion of namespace services-1654 failed: unexpected items still remain in namespace: services-1654 for gvr: /v1, Resource=pods\nE1012 18:35:34.038153       1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"latency-svc-qrmvf.16ad5c4ee5c521a8\", GenerateName:\"\", Namespace:\"svc-latency-6228\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"Endpoints\", Namespace:\"svc-latency-6228\", 
Name:\"latency-svc-qrmvf\", UID:\"ed5eff90-bad9-4638-ae8b-5b5697003357\", APIVersion:\"v1\", ResourceVersion:\"31249\", FieldPath:\"\"}, Reason:\"FailedToUpdateEndpoint\", Message:\"Failed to update endpoint svc-latency-6228/latency-svc-qrmvf: Operation cannot be fulfilled on endpoints \\\"latency-svc-qrmvf\\\": StorageError: invalid object, Code: 4, Key: /registry/services/endpoints/svc-latency-6228/latency-svc-qrmvf, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: ed5eff90-bad9-4638-ae8b-5b5697003357, UID in object meta: \", Source:v1.EventSource{Component:\"endpoint-controller\", Host:\"\"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc051933d6bf4cfa8, ext:988503709810, loc:(*time.Location)(0x750cdc0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc051933d6bf4cfa8, ext:988503709810, loc:(*time.Location)(0x750cdc0)}}, Count:1, Type:\"Warning\", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"latency-svc-qrmvf.16ad5c4ee5c521a8\" is forbidden: unable to create new content in namespace svc-latency-6228 because it is being terminated' (will not retry!)\nI1012 18:35:34.091397       1 replica_set.go:599] \"Too many replicas\" replicaSet=\"deployment-4591/webserver-86f449785c\" need=0 deleting=1\nI1012 18:35:34.091579       1 replica_set.go:227] \"Found related ReplicaSets\" replicaSet=\"deployment-4591/webserver-86f449785c\" relatedReplicaSets=[webserver-847dcfb7fb webserver-86f449785c]\nI1012 18:35:34.091752       1 controller_utils.go:592] \"Deleting pod\" controller=\"webserver-86f449785c\" pod=\"deployment-4591/webserver-86f449785c-dcr8k\"\nI1012 18:35:34.093919       1 event.go:291] \"Event occurred\" object=\"deployment-4591/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-86f449785c to 0\"\nI1012 18:35:34.116382       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-4591/webserver\" err=\"Operation cannot be fulfilled on deployments.apps \\\"webserver\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI1012 18:35:34.118070       1 event.go:291] \"Event occurred\" object=\"deployment-4591/webserver-86f449785c\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-86f449785c-dcr8k\"\nE1012 18:35:34.254502       1 namespace_controller.go:162] deletion of namespace services-1654 failed: unexpected items still remain in namespace: services-1654 for gvr: /v1, Resource=pods\nE1012 18:35:34.335151       1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"latency-svc-w56fr.16ad5c4eef050680\", GenerateName:\"\", Namespace:\"svc-latency-6228\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"Endpoints\", Namespace:\"svc-latency-6228\", 
Name:\"latency-svc-w56fr\", UID:\"3a9b7617-1451-49aa-a78e-cb7ac628c396\", APIVersion:\"v1\", ResourceVersion:\"32095\", FieldPath:\"\"}, Reason:\"FailedToUpdateEndpoint\", Message:\"Failed to update endpoint svc-latency-6228/latency-svc-w56fr: Operation cannot be fulfilled on endpoints \\\"latency-svc-w56fr\\\": StorageError: invalid object, Code: 4, Key: /registry/services/endpoints/svc-latency-6228/latency-svc-w56fr, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 3a9b7617-1451-49aa-a78e-cb7ac628c396, UID in object meta: \", Source:v1.EventSource{Component:\"endpoint-controller\", Host:\"\"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc051933d7534b480, ext:988658892105, loc:(*time.Location)(0x750cdc0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc051933d7534b480, ext:988658892105, loc:(*time.Location)(0x750cdc0)}}, Count:1, Type:\"Warning\", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"latency-svc-w56fr.16ad5c4eef050680\" is forbidden: unable to create new content in namespace svc-latency-6228 because it is being terminated' (will not retry!)\nI1012 18:35:34.429357       1 event.go:291] \"Event occurred\" object=\"ephemeral-3192/inline-volume-tester-bgcck-my-volume-0\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForPodScheduled\" message=\"waiting for pod inline-volume-tester-bgcck to be scheduled\"\nI1012 18:35:34.429902       1 pv_controller.go:930] claim \"provisioning-8118/pvc-f7kzk\" bound to volume \"local-h7hsp\"\nI1012 18:35:34.591136       1 pv_controller.go:879] volume \"local-h7hsp\" entered phase \"Bound\"\nI1012 18:35:34.591173       1 pv_controller.go:982] volume \"local-h7hsp\" bound to claim \"provisioning-8118/pvc-f7kzk\"\nI1012 18:35:34.700487       1 pv_controller.go:823] claim \"provisioning-8118/pvc-f7kzk\" entered phase \"Bound\"\nI1012 18:35:34.700743       1 pv_controller.go:930] claim \"provisioning-4993/pvc-5fthd\" bound to volume \"local-g6hxk\"\nI1012 18:35:34.775460       1 pv_controller.go:879] volume \"local-g6hxk\" entered phase \"Bound\"\nI1012 18:35:34.775566       1 pv_controller.go:982] volume \"local-g6hxk\" bound to claim \"provisioning-4993/pvc-5fthd\"\nI1012 18:35:34.808501       1 pv_controller.go:823] claim \"provisioning-4993/pvc-5fthd\" entered phase \"Bound\"\nI1012 18:35:34.967588       1 event.go:291] \"Event occurred\" object=\"ephemeral-3192/inline-volume-tester-bgcck-my-volume-0\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-ephemeral-3192\\\" or manually created by system administrator\"\nE1012 18:35:35.031834       1 namespace_controller.go:162] deletion of namespace services-1654 failed: unexpected items still remain in namespace: services-1654 for gvr: /v1, Resource=pods\nI1012 18:35:35.153595       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-7473-4745/csi-mockplugin-7d768f864c\" objectUID=02c935c2-3207-4d21-9629-c36490f6e107 kind=\"ControllerRevision\" virtual=false\nI1012 18:35:35.154583       1 stateful_set.go:440] StatefulSet has been deleted csi-mock-volumes-7473-4745/csi-mockplugin\nI1012 18:35:35.154668       1 garbagecollector.go:471] \"Processing object\" 
object=\"csi-mock-volumes-7473-4745/csi-mockplugin-0\" objectUID=4cb99501-e3b0-494a-8bcf-3c8d69c6bd49 kind=\"Pod\" virtual=false\nI1012 18:35:35.157591       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-7473-4745/csi-mockplugin-7d768f864c\" objectUID=02c935c2-3207-4d21-9629-c36490f6e107 kind=\"ControllerRevision\" propagationPolicy=Background\nI1012 18:35:35.158167       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-7473-4745/csi-mockplugin-0\" objectUID=4cb99501-e3b0-494a-8bcf-3c8d69c6bd49 kind=\"Pod\" propagationPolicy=Background\nI1012 18:35:35.270038       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-7473-4745/csi-mockplugin-attacher-5f7796cc99\" objectUID=0842786b-b8d4-445e-8e43-7d8e4a585c4d kind=\"ControllerRevision\" virtual=false\nI1012 18:35:35.270238       1 stateful_set.go:440] StatefulSet has been deleted csi-mock-volumes-7473-4745/csi-mockplugin-attacher\nI1012 18:35:35.270423       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-7473-4745/csi-mockplugin-attacher-0\" objectUID=e697c892-00a0-4171-a8a4-98790d97112c kind=\"Pod\" virtual=false\nI1012 18:35:35.291504       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-7473-4745/csi-mockplugin-attacher-5f7796cc99\" objectUID=0842786b-b8d4-445e-8e43-7d8e4a585c4d kind=\"ControllerRevision\" propagationPolicy=Background\nI1012 18:35:35.292443       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-7473-4745/csi-mockplugin-attacher-0\" objectUID=e697c892-00a0-4171-a8a4-98790d97112c kind=\"Pod\" propagationPolicy=Background\nI1012 18:35:35.373399       1 namespace_controller.go:185] Namespace has been deleted hostport-9831\nI1012 18:35:35.493636       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume \"pvc-ef2cc40a-355d-46b5-ada7-9ba8be59289b\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-09ca24a10595e130d\") from node \"ip-172-20-37-53.us-west-1.compute.internal\" \nI1012 18:35:35.494142       1 event.go:291] \"Event occurred\" object=\"provisioning-6739/pod-subpath-test-dynamicpv-d6bj\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-ef2cc40a-355d-46b5-ada7-9ba8be59289b\\\" \"\nE1012 18:35:35.624655       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE1012 18:35:35.626488       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE1012 18:35:35.802422       1 tokens_controller.go:262] error synchronizing serviceaccount volume-expand-2473-1840/default: secrets \"default-token-p8hs4\" is forbidden: unable to create new content in namespace volume-expand-2473-1840 because it is being terminated\nI1012 18:35:36.925387       1 event.go:291] \"Event occurred\" object=\"statefulset-1322/ss2\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss2-0 in StatefulSet ss2 successful\"\nI1012 18:35:37.060843       1 pv_controller.go:879] volume \"pvc-e346ea1b-f300-4d0f-b9db-519c7c2700b2\" entered phase \"Bound\"\nI1012 18:35:37.060874       1 pv_controller.go:982] volume 
\"pvc-e346ea1b-f300-4d0f-b9db-519c7c2700b2\" bound to claim \"ephemeral-3192/inline-volume-tester-bgcck-my-volume-0\"\nI1012 18:35:37.121956       1 pv_controller.go:823] claim \"ephemeral-3192/inline-volume-tester-bgcck-my-volume-0\" entered phase \"Bound\"\nE1012 18:35:37.312226       1 namespace_controller.go:162] deletion of namespace apply-7370 failed: unexpected items still remain in namespace: apply-7370 for gvr: /v1, Resource=pods\nI1012 18:35:37.724129       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-7473\nI1012 18:35:37.985355       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-e346ea1b-f300-4d0f-b9db-519c7c2700b2\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-ephemeral-3192^315c9144-2b8b-11ec-a3ea-2efa9c825458\") from node \"ip-172-20-47-26.us-west-1.compute.internal\" \nI1012 18:35:38.391624       1 aws.go:4736] Deleted all security groups for load balancer: test-rolling-update-with-lb\nI1012 18:35:38.391818       1 controller.go:916] Removing finalizer from service deployment-1974/test-rolling-update-with-lb\nI1012 18:35:38.399597       1 controller.go:942] Patching status for service deployment-1974/test-rolling-update-with-lb\nI1012 18:35:38.399912       1 event.go:291] \"Event occurred\" object=\"deployment-1974/test-rolling-update-with-lb\" kind=\"Service\" apiVersion=\"v1\" type=\"Normal\" reason=\"DeletedLoadBalancer\" message=\"Deleted load balancer\"\nE1012 18:35:38.402479       1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"test-rolling-update-with-lb.16ad5c4ffba749fe\", GenerateName:\"\", Namespace:\"deployment-1974\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"Service\", Namespace:\"deployment-1974\", Name:\"test-rolling-update-with-lb\", UID:\"89d36149-10c9-4bec-abdd-c9b9f4c8525f\", APIVersion:\"v1\", ResourceVersion:\"29077\", FieldPath:\"\"}, Reason:\"DeletedLoadBalancer\", Message:\"Deleted load balancer\", Source:v1.EventSource{Component:\"service-controller\", Host:\"\"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc051933e97d105fe, ext:993165820106, loc:(*time.Location)(0x750cdc0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc051933e97d105fe, ext:993165820106, loc:(*time.Location)(0x750cdc0)}}, Count:1, Type:\"Normal\", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"test-rolling-update-with-lb.16ad5c4ffba749fe\" is forbidden: unable to create new content in namespace deployment-1974 because it is being terminated' (will not retry!)\nE1012 18:35:38.417319       1 namespace_controller.go:162] deletion of namespace pods-4841 failed: unexpected items still remain in namespace: pods-4841 for gvr: /v1, Resource=pods\nE1012 18:35:38.424037       1 namespace_controller.go:162] deletion of namespace svcaccounts-6418 failed: unexpected items still remain in namespace: svcaccounts-6418 for gvr: /v1, Resource=pods\nI1012 
18:35:38.518247       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume \"pvc-e346ea1b-f300-4d0f-b9db-519c7c2700b2\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-ephemeral-3192^315c9144-2b8b-11ec-a3ea-2efa9c825458\") from node \"ip-172-20-47-26.us-west-1.compute.internal\" \nI1012 18:35:38.518430       1 event.go:291] \"Event occurred\" object=\"ephemeral-3192/inline-volume-tester-bgcck\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-e346ea1b-f300-4d0f-b9db-519c7c2700b2\\\" \"\nI1012 18:35:38.771901       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume \"pvc-ec5eba8d-3bd9-469b-bd05-9c29f5bfa4a7\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-086f04460a3abd69a\") on node \"ip-172-20-47-26.us-west-1.compute.internal\" \nI1012 18:35:38.805052       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-ec5eba8d-3bd9-469b-bd05-9c29f5bfa4a7\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-086f04460a3abd69a\") from node \"ip-172-20-37-53.us-west-1.compute.internal\" \nI1012 18:35:38.863863       1 event.go:291] \"Event occurred\" object=\"statefulset-1322/ss2\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss2-1 in StatefulSet ss2 successful\"\nI1012 18:35:38.892456       1 namespace_controller.go:185] Namespace has been deleted gc-1970\nI1012 18:35:38.996462       1 namespace_controller.go:185] Namespace has been deleted dns-9572\nI1012 18:35:40.219128       1 namespace_controller.go:185] Namespace has been deleted kubectl-3198\nI1012 18:35:40.339055       1 namespace_controller.go:185] Namespace has been deleted volumemode-5158\nI1012 18:35:40.448510       1 namespace_controller.go:185] Namespace has been deleted services-1654\nE1012 18:35:40.496495       1 tokens_controller.go:262] error synchronizing serviceaccount csi-mock-volumes-7473-4745/default: secrets \"default-token-9xrj7\" is forbidden: unable to create new content in namespace csi-mock-volumes-7473-4745 because it is being terminated\nI1012 18:35:40.593486       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume \"pvc-ec5eba8d-3bd9-469b-bd05-9c29f5bfa4a7\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-086f04460a3abd69a\") from node \"ip-172-20-37-53.us-west-1.compute.internal\" \nI1012 18:35:40.593869       1 event.go:291] \"Event occurred\" object=\"fsgroupchangepolicy-376/pod-be9c49d1-437a-41ed-a89b-db017d983373\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-ec5eba8d-3bd9-469b-bd05-9c29f5bfa4a7\\\" \"\nI1012 18:35:40.642540       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"services-674/affinity-clusterip-timeout\" need=3 creating=3\nI1012 18:35:40.648947       1 event.go:291] \"Event occurred\" object=\"services-674/affinity-clusterip-timeout\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: affinity-clusterip-timeout-v9dp5\"\nI1012 18:35:40.662225       1 event.go:291] \"Event occurred\" object=\"services-674/affinity-clusterip-timeout\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: affinity-clusterip-timeout-5k2gq\"\nI1012 18:35:40.662521       1 event.go:291] \"Event occurred\" object=\"services-674/affinity-clusterip-timeout\" 
kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: affinity-clusterip-timeout-cr5tz\"\nI1012 18:35:41.243151       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-7792-9792/csi-mockplugin-6d994d855f\" objectUID=5ba87c2e-dfd6-4ae7-8cfb-1e85dcf39beb kind=\"ControllerRevision\" virtual=false\nI1012 18:35:41.243441       1 stateful_set.go:440] StatefulSet has been deleted csi-mock-volumes-7792-9792/csi-mockplugin\nI1012 18:35:41.243528       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-7792-9792/csi-mockplugin-0\" objectUID=ef4cc33a-cecc-48d5-94cf-1490f20ef637 kind=\"Pod\" virtual=false\nI1012 18:35:41.262728       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-7792-9792/csi-mockplugin-6d994d855f\" objectUID=5ba87c2e-dfd6-4ae7-8cfb-1e85dcf39beb kind=\"ControllerRevision\" propagationPolicy=Background\nI1012 18:35:41.262789       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-7792-9792/csi-mockplugin-0\" objectUID=ef4cc33a-cecc-48d5-94cf-1490f20ef637 kind=\"Pod\" propagationPolicy=Background\nI1012 18:35:41.347006       1 stateful_set.go:440] StatefulSet has been deleted csi-mock-volumes-7792-9792/csi-mockplugin-attacher\nI1012 18:35:41.347042       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-7792-9792/csi-mockplugin-attacher-0\" objectUID=8d62fe0f-1eda-4e0f-b134-d506965dda5e kind=\"Pod\" virtual=false\nI1012 18:35:41.347017       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-7792-9792/csi-mockplugin-attacher-5448bd5449\" objectUID=b037de50-38eb-4e64-a7b9-5dcbe83226b9 kind=\"ControllerRevision\" virtual=false\nI1012 18:35:41.348953       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-7792-9792/csi-mockplugin-attacher-0\" objectUID=8d62fe0f-1eda-4e0f-b134-d506965dda5e kind=\"Pod\" propagationPolicy=Background\nI1012 18:35:41.350376       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-7792-9792/csi-mockplugin-attacher-5448bd5449\" objectUID=b037de50-38eb-4e64-a7b9-5dcbe83226b9 kind=\"ControllerRevision\" propagationPolicy=Background\nE1012 18:35:41.572196       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE1012 18:35:42.269031       1 tokens_controller.go:262] error synchronizing serviceaccount ssh-8257/default: secrets \"default-token-bn9kt\" is forbidden: unable to create new content in namespace ssh-8257 because it is being terminated\nI1012 18:35:42.296436       1 namespace_controller.go:185] Namespace has been deleted svc-latency-6228\nI1012 18:35:42.481906       1 deployment_controller.go:583] \"Deployment has been deleted\" deployment=\"services-7918/pause-pod\"\nI1012 18:35:42.481979       1 garbagecollector.go:471] \"Processing object\" object=\"services-7918/pause-pod-596bd87884\" objectUID=46052ad1-0424-47f9-a818-8477b50b7b19 kind=\"ReplicaSet\" virtual=false\nI1012 18:35:42.483628       1 garbagecollector.go:580] \"Deleting object\" object=\"services-7918/pause-pod-596bd87884\" objectUID=46052ad1-0424-47f9-a818-8477b50b7b19 kind=\"ReplicaSet\" propagationPolicy=Background\nI1012 18:35:42.486713       1 garbagecollector.go:471] \"Processing object\" object=\"services-7918/pause-pod-596bd87884-zgjhm\" 
objectUID=b6176d2d-2159-4acb-a026-9670d01332e5 kind=\"Pod\" virtual=false\nI1012 18:35:42.487194       1 garbagecollector.go:471] \"Processing object\" object=\"services-7918/pause-pod-596bd87884-h9gqj\" objectUID=12c7587d-ddc1-42ed-83f5-b7a00ef3fa1e kind=\"Pod\" virtual=false\nI1012 18:35:42.488502       1 garbagecollector.go:580] \"Deleting object\" object=\"services-7918/pause-pod-596bd87884-zgjhm\" objectUID=b6176d2d-2159-4acb-a026-9670d01332e5 kind=\"Pod\" propagationPolicy=Background\nI1012 18:35:42.489175       1 garbagecollector.go:580] \"Deleting object\" object=\"services-7918/pause-pod-596bd87884-h9gqj\" objectUID=12c7587d-ddc1-42ed-83f5-b7a00ef3fa1e kind=\"Pod\" propagationPolicy=Background\nI1012 18:35:42.595392       1 garbagecollector.go:471] \"Processing object\" object=\"services-7918/sourceip-test-m6pn2\" objectUID=b432d14a-7cac-4a04-8fda-b1fa9f022e44 kind=\"EndpointSlice\" virtual=false\nI1012 18:35:42.598746       1 garbagecollector.go:580] \"Deleting object\" object=\"services-7918/sourceip-test-m6pn2\" objectUID=b432d14a-7cac-4a04-8fda-b1fa9f022e44 kind=\"EndpointSlice\" propagationPolicy=Background\nI1012 18:35:42.808526       1 namespace_controller.go:185] Namespace has been deleted replicaset-1907\nI1012 18:35:43.491299       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"provisioning-4993/pvc-5fthd\"\nI1012 18:35:43.499843       1 pv_controller.go:640] volume \"local-g6hxk\" is released and reclaim policy \"Retain\" will be executed\nI1012 18:35:43.504138       1 pv_controller.go:879] volume \"local-g6hxk\" entered phase \"Released\"\nI1012 18:35:43.544974       1 pv_controller_base.go:505] deletion of claim \"provisioning-4993/pvc-5fthd\" was already processed\nI1012 18:35:43.864762       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-7792\nI1012 18:35:44.084802       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"services-7577/nodeport-update-service\" need=2 creating=2\nI1012 18:35:44.090369       1 event.go:291] \"Event occurred\" object=\"services-7577/nodeport-update-service\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: nodeport-update-service-crvnh\"\nI1012 18:35:44.097766       1 event.go:291] \"Event occurred\" object=\"services-7577/nodeport-update-service\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: nodeport-update-service-xfhng\"\nI1012 18:35:44.226039       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"ephemeral-4484/inline-volume-tester-4ns7q\" PVC=\"ephemeral-4484/inline-volume-tester-4ns7q-my-volume-0\"\nI1012 18:35:44.226065       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"ephemeral-4484/inline-volume-tester-4ns7q-my-volume-0\"\nI1012 18:35:44.226165       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"ephemeral-4484/inline-volume-tester-4ns7q\" PVC=\"ephemeral-4484/inline-volume-tester-4ns7q-my-volume-1\"\nI1012 18:35:44.226307       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"ephemeral-4484/inline-volume-tester-4ns7q-my-volume-1\"\nI1012 18:35:44.232614       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"ephemeral-4484/inline-volume-tester-4ns7q-my-volume-0\"\nI1012 18:35:44.239813       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-4484/inline-volume-tester-4ns7q\" objectUID=9cf7c8e9-e54d-4e07-85bf-91b3a6b43f01 
kind=\"Pod\" virtual=false\nI1012 18:35:44.243628       1 pv_controller.go:640] volume \"pvc-9f297920-865a-4e87-8c39-8b13f79fad8a\" is released and reclaim policy \"Delete\" will be executed\nI1012 18:35:44.244130       1 garbagecollector.go:595] adding [v1/PersistentVolumeClaim, namespace: ephemeral-4484, name: inline-volume-tester-4ns7q-my-volume-1, uid: 95a05d24-7b87-4cfd-b6b4-a67baa186a85] to attemptToDelete, because its owner [v1/Pod, namespace: ephemeral-4484, name: inline-volume-tester-4ns7q, uid: 9cf7c8e9-e54d-4e07-85bf-91b3a6b43f01] is deletingDependents\nI1012 18:35:44.244452       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-4484/inline-volume-tester-4ns7q-my-volume-1\" objectUID=95a05d24-7b87-4cfd-b6b4-a67baa186a85 kind=\"PersistentVolumeClaim\" virtual=false\nI1012 18:35:44.244417       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"ephemeral-4484/inline-volume-tester-4ns7q-my-volume-1\"\nI1012 18:35:44.252468       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-4484/inline-volume-tester-4ns7q\" objectUID=9cf7c8e9-e54d-4e07-85bf-91b3a6b43f01 kind=\"Pod\" virtual=false\nI1012 18:35:44.253765       1 pv_controller.go:879] volume \"pvc-9f297920-865a-4e87-8c39-8b13f79fad8a\" entered phase \"Released\"\nI1012 18:35:44.261208       1 pv_controller.go:1340] isVolumeReleased[pvc-9f297920-865a-4e87-8c39-8b13f79fad8a]: volume is released\nI1012 18:35:44.261351       1 pv_controller.go:640] volume \"pvc-95a05d24-7b87-4cfd-b6b4-a67baa186a85\" is released and reclaim policy \"Delete\" will be executed\nI1012 18:35:44.261435       1 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: ephemeral-4484, name: inline-volume-tester-4ns7q, uid: 9cf7c8e9-e54d-4e07-85bf-91b3a6b43f01]\nI1012 18:35:44.269141       1 pv_controller.go:879] volume \"pvc-95a05d24-7b87-4cfd-b6b4-a67baa186a85\" entered phase \"Released\"\nE1012 18:35:44.271290       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI1012 18:35:44.276214       1 pv_controller_base.go:505] deletion of claim \"ephemeral-4484/inline-volume-tester-4ns7q-my-volume-0\" was already processed\nI1012 18:35:44.278747       1 pv_controller.go:1340] isVolumeReleased[pvc-95a05d24-7b87-4cfd-b6b4-a67baa186a85]: volume is released\nI1012 18:35:44.288169       1 pv_controller_base.go:505] deletion of claim \"ephemeral-4484/inline-volume-tester-4ns7q-my-volume-1\" was already processed\nI1012 18:35:45.480527       1 event.go:291] \"Event occurred\" object=\"statefulset-1322/ss2\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss2-2 in StatefulSet ss2 successful\"\nE1012 18:35:46.215274       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI1012 18:35:46.223466       1 namespace_controller.go:185] Namespace has been deleted volume-expand-2473-1840\nI1012 18:35:46.769390       1 namespace_controller.go:185] Namespace has been deleted deployment-1974\nI1012 18:35:47.357736       1 namespace_controller.go:185] Namespace has been deleted ssh-8257\nE1012 18:35:47.765935       1 tokens_controller.go:262] error synchronizing serviceaccount services-7918/default: secrets 
\"default-token-w466j\" is forbidden: unable to create new content in namespace services-7918 because it is being terminated\nI1012 18:35:48.120400       1 garbagecollector.go:471] \"Processing object\" object=\"services-84/affinity-clusterip-8j6ml\" objectUID=465caf22-3415-41ba-af27-6b9a688726ad kind=\"EndpointSlice\" virtual=false\nI1012 18:35:48.124688       1 garbagecollector.go:580] \"Deleting object\" object=\"services-84/affinity-clusterip-8j6ml\" objectUID=465caf22-3415-41ba-af27-6b9a688726ad kind=\"EndpointSlice\" propagationPolicy=Background\nI1012 18:35:48.875112       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-95a05d24-7b87-4cfd-b6b4-a67baa186a85\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-ephemeral-4484^166096c5-2b8b-11ec-b758-1601313cdd97\") on node \"ip-172-20-59-223.us-west-1.compute.internal\" \nI1012 18:35:48.921574       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-9f297920-865a-4e87-8c39-8b13f79fad8a\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-ephemeral-4484^16607065-2b8b-11ec-b758-1601313cdd97\") on node \"ip-172-20-59-223.us-west-1.compute.internal\" \nI1012 18:35:48.921900       1 operation_generator.go:1577] Verified volume is safe to detach for volume \"pvc-95a05d24-7b87-4cfd-b6b4-a67baa186a85\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-ephemeral-4484^166096c5-2b8b-11ec-b758-1601313cdd97\") on node \"ip-172-20-59-223.us-west-1.compute.internal\" \nI1012 18:35:48.947400       1 operation_generator.go:1577] Verified volume is safe to detach for volume \"pvc-9f297920-865a-4e87-8c39-8b13f79fad8a\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-ephemeral-4484^16607065-2b8b-11ec-b758-1601313cdd97\") on node \"ip-172-20-59-223.us-west-1.compute.internal\" \nE1012 18:35:49.119624       1 namespace_controller.go:162] deletion of namespace svcaccounts-6418 failed: unexpected items still remain in namespace: svcaccounts-6418 for gvr: /v1, Resource=pods\nE1012 18:35:49.120039       1 namespace_controller.go:162] deletion of namespace pods-4841 failed: unexpected items still remain in namespace: pods-4841 for gvr: /v1, Resource=pods\nI1012 18:35:49.192589       1 event.go:291] \"Event occurred\" object=\"ephemeral-3192/inline-volume-tester2-s7klr-my-volume-0\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForPodScheduled\" message=\"waiting for pod inline-volume-tester2-s7klr to be scheduled\"\nI1012 18:35:49.429655       1 event.go:291] \"Event occurred\" object=\"ephemeral-3192/inline-volume-tester2-s7klr-my-volume-0\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForPodScheduled\" message=\"waiting for pod inline-volume-tester2-s7klr to be scheduled\"\nI1012 18:35:49.444602       1 namespace_controller.go:185] Namespace has been deleted cronjob-4720\nI1012 18:35:49.492690       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume \"pvc-95a05d24-7b87-4cfd-b6b4-a67baa186a85\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-ephemeral-4484^166096c5-2b8b-11ec-b758-1601313cdd97\") on node \"ip-172-20-59-223.us-west-1.compute.internal\" \nI1012 18:35:49.534176       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume \"pvc-9f297920-865a-4e87-8c39-8b13f79fad8a\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-ephemeral-4484^16607065-2b8b-11ec-b758-1601313cdd97\") on node \"ip-172-20-59-223.us-west-1.compute.internal\" \nE1012 18:35:49.672357       1 pv_controller.go:1451] error finding provisioning plugin for 
claim provisioning-1607/pvc-n7s6x: storageclass.storage.k8s.io \"provisioning-1607\" not found\nI1012 18:35:49.672962       1 event.go:291] \"Event occurred\" object=\"provisioning-1607/pvc-n7s6x\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"provisioning-1607\\\" not found\"\nI1012 18:35:49.730569       1 pv_controller.go:879] volume \"local-d46nt\" entered phase \"Available\"\nE1012 18:35:50.487076       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-4993/default: secrets \"default-token-mq8c4\" is forbidden: unable to create new content in namespace provisioning-4993 because it is being terminated\nE1012 18:35:50.926154       1 tokens_controller.go:262] error synchronizing serviceaccount pv-2004/default: secrets \"default-token-wzmkf\" is forbidden: unable to create new content in namespace pv-2004 because it is being terminated\nI1012 18:35:50.960501       1 event.go:291] \"Event occurred\" object=\"ephemeral-3192/inline-volume-tester2-s7klr-my-volume-0\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-ephemeral-3192\\\" or manually created by system administrator\"\nI1012 18:35:50.981162       1 pv_controller.go:879] volume \"pvc-ebec9b1f-8a87-4cd5-af8e-a2253883cc37\" entered phase \"Bound\"\nI1012 18:35:50.981996       1 pv_controller.go:982] volume \"pvc-ebec9b1f-8a87-4cd5-af8e-a2253883cc37\" bound to claim \"ephemeral-3192/inline-volume-tester2-s7klr-my-volume-0\"\nI1012 18:35:50.992181       1 pv_controller.go:823] claim \"ephemeral-3192/inline-volume-tester2-s7klr-my-volume-0\" entered phase \"Bound\"\nE1012 18:35:51.148516       1 tokens_controller.go:262] error synchronizing serviceaccount ephemeral-4484/default: secrets \"default-token-fskfv\" is forbidden: unable to create new content in namespace ephemeral-4484 because it is being terminated\nI1012 18:35:51.551283       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-7792-9792\nI1012 18:35:51.966361       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-ebec9b1f-8a87-4cd5-af8e-a2253883cc37\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-ephemeral-3192^39ad1ade-2b8b-11ec-a3ea-2efa9c825458\") from node \"ip-172-20-47-26.us-west-1.compute.internal\" \nI1012 18:35:52.532100       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume \"pvc-ebec9b1f-8a87-4cd5-af8e-a2253883cc37\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-ephemeral-3192^39ad1ade-2b8b-11ec-a3ea-2efa9c825458\") from node \"ip-172-20-47-26.us-west-1.compute.internal\" \nI1012 18:35:52.532493       1 event.go:291] \"Event occurred\" object=\"ephemeral-3192/inline-volume-tester2-s7klr\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-ebec9b1f-8a87-4cd5-af8e-a2253883cc37\\\" \"\nI1012 18:35:52.933776       1 namespace_controller.go:185] Namespace has been deleted services-7918\nI1012 18:35:53.986954       1 namespace_controller.go:185] Namespace has been deleted secrets-9752\nI1012 18:35:54.045257       1 stateful_set.go:440] StatefulSet has been deleted ephemeral-4484-8560/csi-hostpathplugin\nI1012 18:35:54.045519       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-4484-8560/csi-hostpathplugin-778c797655\" 
objectUID=602d4bfe-e4ac-44a3-901c-4100274fa3bb kind=\"ControllerRevision\" virtual=false\nI1012 18:35:54.045519       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-4484-8560/csi-hostpathplugin-0\" objectUID=e3fe5736-52c8-411f-bed8-32f0a2fb8b82 kind=\"Pod\" virtual=false\nI1012 18:35:54.047907       1 garbagecollector.go:580] \"Deleting object\" object=\"ephemeral-4484-8560/csi-hostpathplugin-778c797655\" objectUID=602d4bfe-e4ac-44a3-901c-4100274fa3bb kind=\"ControllerRevision\" propagationPolicy=Background\nI1012 18:35:54.048380       1 garbagecollector.go:580] \"Deleting object\" object=\"ephemeral-4484-8560/csi-hostpathplugin-0\" objectUID=e3fe5736-52c8-411f-bed8-32f0a2fb8b82 kind=\"Pod\" propagationPolicy=Background\nI1012 18:35:54.296528       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"provisioning-6739/aws2455t\"\nI1012 18:35:54.301917       1 pv_controller.go:640] volume \"pvc-ef2cc40a-355d-46b5-ada7-9ba8be59289b\" is released and reclaim policy \"Delete\" will be executed\nI1012 18:35:54.304760       1 pv_controller.go:879] volume \"pvc-ef2cc40a-355d-46b5-ada7-9ba8be59289b\" entered phase \"Released\"\nI1012 18:35:54.307459       1 pv_controller.go:1340] isVolumeReleased[pvc-ef2cc40a-355d-46b5-ada7-9ba8be59289b]: volume is released\nI1012 18:35:55.287908       1 event.go:291] \"Event occurred\" object=\"statefulset-3442/datadir-ss-1\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForFirstConsumer\" message=\"waiting for first consumer to be created before binding\"\nI1012 18:35:55.288226       1 event.go:291] \"Event occurred\" object=\"statefulset-3442/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Claim datadir-ss-1 Pod ss-1 in StatefulSet ss success\"\nI1012 18:35:55.309843       1 event.go:291] \"Event occurred\" object=\"statefulset-3442/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss-1 in StatefulSet ss successful\"\nI1012 18:35:55.321107       1 event.go:291] \"Event occurred\" object=\"statefulset-3442/datadir-ss-1\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"ebs.csi.aws.com\\\" or manually created by system administrator\"\nI1012 18:35:55.612867       1 namespace_controller.go:185] Namespace has been deleted provisioning-4993\nI1012 18:35:55.728615       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-7473-4745\nI1012 18:35:55.866227       1 event.go:291] \"Event occurred\" object=\"webhook-509/sample-webhook-deployment\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set sample-webhook-deployment-78988fc6cd to 1\"\nI1012 18:35:55.871156       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"webhook-509/sample-webhook-deployment-78988fc6cd\" need=1 creating=1\nI1012 18:35:55.875511       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"webhook-509/sample-webhook-deployment\" err=\"Operation cannot be fulfilled on deployments.apps \\\"sample-webhook-deployment\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI1012 18:35:55.882799       1 event.go:291] \"Event occurred\" object=\"webhook-509/sample-webhook-deployment-78988fc6cd\" kind=\"ReplicaSet\" 
apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: sample-webhook-deployment-78988fc6cd-7zvj2\"\nI1012 18:35:55.883106       1 event.go:291] \"Event occurred\" object=\"volume-expand-3060-8025/csi-hostpathplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful\"\nI1012 18:35:56.011324       1 namespace_controller.go:185] Namespace has been deleted pv-2004\nI1012 18:35:56.019311       1 event.go:291] \"Event occurred\" object=\"volume-expand-3060/csi-hostpathhmzs7\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-volume-expand-3060\\\" or manually created by system administrator\"\nI1012 18:35:56.111438       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-e5f36049-eb20-42d0-89db-f348969848f9\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0a95cc717d65e153f\") on node \"ip-172-20-37-53.us-west-1.compute.internal\" \nI1012 18:35:56.116101       1 operation_generator.go:1577] Verified volume is safe to detach for volume \"pvc-e5f36049-eb20-42d0-89db-f348969848f9\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0a95cc717d65e153f\") on node \"ip-172-20-37-53.us-west-1.compute.internal\" \nI1012 18:35:56.196989       1 namespace_controller.go:185] Namespace has been deleted ephemeral-4484\nI1012 18:35:56.753092       1 event.go:291] \"Event occurred\" object=\"crd-webhook-1582/sample-crd-conversion-webhook-deployment\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set sample-crd-conversion-webhook-deployment-697cdbd8f4 to 1\"\nI1012 18:35:56.753685       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"crd-webhook-1582/sample-crd-conversion-webhook-deployment-697cdbd8f4\" need=1 creating=1\nI1012 18:35:56.763834       1 event.go:291] \"Event occurred\" object=\"crd-webhook-1582/sample-crd-conversion-webhook-deployment-697cdbd8f4\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: sample-crd-conversion-webhook-deployment-697cdbd8f4-bdwhc\"\nI1012 18:35:56.768415       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"crd-webhook-1582/sample-crd-conversion-webhook-deployment\" err=\"Operation cannot be fulfilled on deployments.apps \\\"sample-crd-conversion-webhook-deployment\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI1012 18:35:56.974778       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"volume-6/awsnzfdm\"\nI1012 18:35:56.983362       1 pv_controller.go:640] volume \"pvc-e5f36049-eb20-42d0-89db-f348969848f9\" is released and reclaim policy \"Delete\" will be executed\nI1012 18:35:56.986625       1 pv_controller.go:879] volume \"pvc-e5f36049-eb20-42d0-89db-f348969848f9\" entered phase \"Released\"\nI1012 18:35:56.988950       1 pv_controller.go:1340] isVolumeReleased[pvc-e5f36049-eb20-42d0-89db-f348969848f9]: volume is released\nI1012 18:35:57.331178       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-ec983b34-3e78-4f43-a190-d5487bb6c4e1\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-5746^4\") on node \"ip-172-20-37-53.us-west-1.compute.internal\" \nI1012 18:35:57.347639       1 
operation_generator.go:1577] Verified volume is safe to detach for volume \"pvc-ec983b34-3e78-4f43-a190-d5487bb6c4e1\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-5746^4\") on node \"ip-172-20-37-53.us-west-1.compute.internal\" \nI1012 18:35:57.564402       1 event.go:291] \"Event occurred\" object=\"statefulset-1322/ss2\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss2-0 in StatefulSet ss2 successful\"\nI1012 18:35:57.774787       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"csi-mock-volumes-5746/pvc-4x76d\"\nI1012 18:35:57.780817       1 pv_controller.go:640] volume \"pvc-ec983b34-3e78-4f43-a190-d5487bb6c4e1\" is released and reclaim policy \"Delete\" will be executed\nI1012 18:35:57.784402       1 pv_controller.go:879] volume \"pvc-ec983b34-3e78-4f43-a190-d5487bb6c4e1\" entered phase \"Released\"\nI1012 18:35:57.786617       1 pv_controller.go:1340] isVolumeReleased[pvc-ec983b34-3e78-4f43-a190-d5487bb6c4e1]: volume is released\nE1012 18:35:57.794815       1 pv_protection_controller.go:118] PV pvc-ec983b34-3e78-4f43-a190-d5487bb6c4e1 failed with : Operation cannot be fulfilled on persistentvolumes \"pvc-ec983b34-3e78-4f43-a190-d5487bb6c4e1\": the object has been modified; please apply your changes to the latest version and try again\nI1012 18:35:57.797875       1 pv_controller_base.go:505] deletion of claim \"csi-mock-volumes-5746/pvc-4x76d\" was already processed\nI1012 18:35:57.928358       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume \"pvc-ec983b34-3e78-4f43-a190-d5487bb6c4e1\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-5746^4\") on node \"ip-172-20-37-53.us-west-1.compute.internal\" \nI1012 18:35:57.973945       1 pv_controller.go:879] volume \"pvc-38ad371e-5804-400c-a7db-b95a0c912075\" entered phase \"Bound\"\nI1012 18:35:57.973983       1 pv_controller.go:982] volume \"pvc-38ad371e-5804-400c-a7db-b95a0c912075\" bound to claim \"volume-expand-3060/csi-hostpathhmzs7\"\nI1012 18:35:57.982076       1 pv_controller.go:823] claim \"volume-expand-3060/csi-hostpathhmzs7\" entered phase \"Bound\"\nI1012 18:35:58.128364       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-4591/webserver-847dcfb7fb-rtjvv\" objectUID=5cd50ae0-1966-4e03-8ab8-e5448fe61550 kind=\"Pod\" virtual=false\nI1012 18:35:58.129577       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-4591/webserver-847dcfb7fb-4gvs4\" objectUID=fcedc290-427b-4564-87be-1afbb02d7bfb kind=\"Pod\" virtual=false\nI1012 18:35:58.130858       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-4591/webserver-847dcfb7fb-d6qvr\" objectUID=91aaf4ed-3688-4ddb-a59a-9e9ff9e1ecdc kind=\"Pod\" virtual=false\nI1012 18:35:58.131519       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-4591/webserver-847dcfb7fb-4rdls\" objectUID=23891614-bdb1-42e6-8542-f942bfd4d0b0 kind=\"Pod\" virtual=false\nI1012 18:35:58.131867       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-4591/webserver-847dcfb7fb-l4gnt\" objectUID=9bc4f03c-b3a7-4371-b06b-0b25d7afed2d kind=\"Pod\" virtual=false\nI1012 18:35:58.132178       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-4591/webserver-847dcfb7fb-m6hnn\" objectUID=407f47b9-4e2b-4459-b015-f15fb63faf2f kind=\"Pod\" virtual=false\nI1012 18:35:58.132495       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-4591/webserver-847dcfb7fb-h75m6\" 
objectUID=aa1ed71c-96f0-4929-95db-e6c2e713830a kind=\"Pod\" virtual=false\nI1012 18:35:58.132821       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-4591/webserver-847dcfb7fb-nt6jt\" objectUID=605d4bf4-059c-48da-aca8-7c7d9429dd11 kind=\"Pod\" virtual=false\nI1012 18:35:58.134916       1 garbagecollector.go:580] \"Deleting object\" object=\"deployment-4591/webserver-847dcfb7fb-rtjvv\" objectUID=5cd50ae0-1966-4e03-8ab8-e5448fe61550 kind=\"Pod\" propagationPolicy=Background\nI1012 18:35:58.143260       1 garbagecollector.go:580] \"Deleting object\" object=\"deployment-4591/webserver-847dcfb7fb-h75m6\" objectUID=aa1ed71c-96f0-4929-95db-e6c2e713830a kind=\"Pod\" propagationPolicy=Background\nI1012 18:35:58.143654       1 garbagecollector.go:580] \"Deleting object\" object=\"deployment-4591/webserver-847dcfb7fb-d6qvr\" objectUID=91aaf4ed-3688-4ddb-a59a-9e9ff9e1ecdc kind=\"Pod\" propagationPolicy=Background\nI1012 18:35:58.144516       1 garbagecollector.go:580] \"Deleting object\" object=\"deployment-4591/webserver-847dcfb7fb-m6hnn\" objectUID=407f47b9-4e2b-4459-b015-f15fb63faf2f kind=\"Pod\" propagationPolicy=Background\nI1012 18:35:58.144835       1 garbagecollector.go:580] \"Deleting object\" object=\"deployment-4591/webserver-847dcfb7fb-l4gnt\" objectUID=9bc4f03c-b3a7-4371-b06b-0b25d7afed2d kind=\"Pod\" propagationPolicy=Background\nI1012 18:35:58.145412       1 garbagecollector.go:580] \"Deleting object\" object=\"deployment-4591/webserver-847dcfb7fb-nt6jt\" objectUID=605d4bf4-059c-48da-aca8-7c7d9429dd11 kind=\"Pod\" propagationPolicy=Background\nI1012 18:35:58.146705       1 garbagecollector.go:580] \"Deleting object\" object=\"deployment-4591/webserver-847dcfb7fb-4gvs4\" objectUID=fcedc290-427b-4564-87be-1afbb02d7bfb kind=\"Pod\" propagationPolicy=Background\nI1012 18:35:58.146996       1 garbagecollector.go:580] \"Deleting object\" object=\"deployment-4591/webserver-847dcfb7fb-4rdls\" objectUID=23891614-bdb1-42e6-8542-f942bfd4d0b0 kind=\"Pod\" propagationPolicy=Background\nI1012 18:35:58.409789       1 namespace_controller.go:185] Namespace has been deleted services-84\nI1012 18:35:58.635301       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"provisioning-8118/pvc-f7kzk\"\nI1012 18:35:58.643155       1 pv_controller.go:640] volume \"local-h7hsp\" is released and reclaim policy \"Retain\" will be executed\nI1012 18:35:58.652052       1 pv_controller.go:879] volume \"local-h7hsp\" entered phase \"Released\"\nI1012 18:35:58.756588       1 pv_controller.go:879] volume \"pvc-710b2517-adfd-4509-9ecd-ad9dcc3e4bca\" entered phase \"Bound\"\nI1012 18:35:58.757190       1 pv_controller.go:982] volume \"pvc-710b2517-adfd-4509-9ecd-ad9dcc3e4bca\" bound to claim \"statefulset-3442/datadir-ss-1\"\nI1012 18:35:58.765723       1 pv_controller.go:823] claim \"statefulset-3442/datadir-ss-1\" entered phase \"Bound\"\nI1012 18:35:58.766236       1 pv_controller_base.go:505] deletion of claim \"provisioning-8118/pvc-f7kzk\" was already processed\nI1012 18:35:59.000285       1 deployment_controller.go:583] \"Deployment has been deleted\" deployment=\"webhook-1858/sample-webhook-deployment\"\nI1012 18:35:59.361441       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-710b2517-adfd-4509-9ecd-ad9dcc3e4bca\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0ba7080823ea5ea33\") from node \"ip-172-20-47-26.us-west-1.compute.internal\" \nI1012 18:35:59.391643       1 pv_controller.go:879] volume \"local-pvdlknv\" entered phase \"Available\"\nI1012 
18:35:59.440593       1 pv_controller.go:930] claim \"persistent-local-volumes-test-3336/pvc-k92ct\" bound to volume \"local-pvdlknv\"\nI1012 18:35:59.487120       1 pv_controller.go:879] volume \"local-pvdlknv\" entered phase \"Bound\"\nI1012 18:35:59.487818       1 pv_controller.go:982] volume \"local-pvdlknv\" bound to claim \"persistent-local-volumes-test-3336/pvc-k92ct\"\nI1012 18:35:59.517196       1 pv_controller.go:823] claim \"persistent-local-volumes-test-3336/pvc-k92ct\" entered phase \"Bound\"\nI1012 18:35:59.549143       1 pv_controller.go:879] volume \"local-pv4xpjf\" entered phase \"Available\"\nI1012 18:35:59.601023       1 pv_controller.go:930] claim \"persistent-local-volumes-test-5205/pvc-26l8q\" bound to volume \"local-pv4xpjf\"\nI1012 18:35:59.614454       1 pv_controller.go:879] volume \"local-pv4xpjf\" entered phase \"Bound\"\nI1012 18:35:59.614642       1 pv_controller.go:982] volume \"local-pv4xpjf\" bound to claim \"persistent-local-volumes-test-5205/pvc-26l8q\"\nI1012 18:35:59.624953       1 pv_controller.go:823] claim \"persistent-local-volumes-test-5205/pvc-26l8q\" entered phase \"Bound\"\nI1012 18:35:59.866304       1 deployment_controller.go:583] \"Deployment has been deleted\" deployment=\"deployment-4591/webserver\"\nE1012 18:35:59.931743       1 tokens_controller.go:262] error synchronizing serviceaccount deployment-4591/default: serviceaccounts \"default\" not found\nI1012 18:36:00.000088       1 deployment_controller.go:583] \"Deployment has been deleted\" deployment=\"webhook-9864/sample-webhook-deployment\"\nE1012 18:36:00.560353       1 tokens_controller.go:262] error synchronizing serviceaccount kubectl-4771/default: secrets \"default-token-4hh86\" is forbidden: unable to create new content in namespace kubectl-4771 because it is being terminated\nI1012 18:36:00.608469       1 event.go:291] \"Event occurred\" object=\"statefulset-1322/ss2\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss2-1 in StatefulSet ss2 successful\"\nE1012 18:36:01.042611       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-2713/default: secrets \"default-token-chn6b\" is forbidden: unable to create new content in namespace provisioning-2713 because it is being terminated\nE1012 18:36:01.400390       1 tokens_controller.go:262] error synchronizing serviceaccount projected-9462/default: secrets \"default-token-bvnbk\" is forbidden: unable to create new content in namespace projected-9462 because it is being terminated\nI1012 18:36:01.735571       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume \"pvc-710b2517-adfd-4509-9ecd-ad9dcc3e4bca\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0ba7080823ea5ea33\") from node \"ip-172-20-47-26.us-west-1.compute.internal\" \nI1012 18:36:01.735754       1 event.go:291] \"Event occurred\" object=\"statefulset-3442/ss-1\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-710b2517-adfd-4509-9ecd-ad9dcc3e4bca\\\" \"\nI1012 18:36:03.155941       1 pv_controller.go:1340] isVolumeReleased[pvc-e5f36049-eb20-42d0-89db-f348969848f9]: volume is released\nI1012 18:36:03.538206       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-ef2cc40a-355d-46b5-ada7-9ba8be59289b\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-09ca24a10595e130d\") on node \"ip-172-20-37-53.us-west-1.compute.internal\" \nI1012 18:36:03.540612  
     1 pv_controller_base.go:505] deletion of claim \"volume-6/awsnzfdm\" was already processed\nI1012 18:36:03.544233       1 operation_generator.go:1577] Verified volume is safe to detach for volume \"pvc-ef2cc40a-355d-46b5-ada7-9ba8be59289b\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-09ca24a10595e130d\") on node \"ip-172-20-37-53.us-west-1.compute.internal\" \nI1012 18:36:03.930397       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume \"pvc-e5f36049-eb20-42d0-89db-f348969848f9\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0a95cc717d65e153f\") on node \"ip-172-20-37-53.us-west-1.compute.internal\" \nI1012 18:36:04.429678       1 pv_controller.go:930] claim \"provisioning-1607/pvc-n7s6x\" bound to volume \"local-d46nt\"\nI1012 18:36:04.437948       1 pv_controller.go:1340] isVolumeReleased[pvc-ef2cc40a-355d-46b5-ada7-9ba8be59289b]: volume is released\nI1012 18:36:04.445115       1 pv_controller.go:879] volume \"local-d46nt\" entered phase \"Bound\"\nI1012 18:36:04.445148       1 pv_controller.go:982] volume \"local-d46nt\" bound to claim \"provisioning-1607/pvc-n7s6x\"\nI1012 18:36:04.457037       1 pv_controller.go:823] claim \"provisioning-1607/pvc-n7s6x\" entered phase \"Bound\"\nI1012 18:36:04.716892       1 namespace_controller.go:185] Namespace has been deleted subpath-9842\nE1012 18:36:04.721968       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-8118/default: secrets \"default-token-7x8fj\" is forbidden: unable to create new content in namespace provisioning-8118 because it is being terminated\nI1012 18:36:04.842990       1 namespace_controller.go:185] Namespace has been deleted ephemeral-4484-8560\nI1012 18:36:04.884993       1 event.go:291] \"Event occurred\" object=\"statefulset-1322/ss2\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss2-2 in StatefulSet ss2 successful\"\nI1012 18:36:04.988273       1 namespace_controller.go:185] Namespace has been deleted deployment-4591\nI1012 18:36:05.000516       1 deployment_controller.go:583] \"Deployment has been deleted\" deployment=\"deployment-4591/webserver\"\nI1012 18:36:05.035308       1 namespace_controller.go:185] Namespace has been deleted watch-5719\nE1012 18:36:05.060051       1 pv_controller.go:1451] error finding provisioning plugin for claim provisioning-2621/pvc-nzh8z: storageclass.storage.k8s.io \"provisioning-2621\" not found\nI1012 18:36:05.060318       1 event.go:291] \"Event occurred\" object=\"provisioning-2621/pvc-nzh8z\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"provisioning-2621\\\" not found\"\nI1012 18:36:05.117362       1 pv_controller.go:879] volume \"local-ff7tc\" entered phase \"Available\"\nI1012 18:36:05.129901       1 garbagecollector.go:471] \"Processing object\" object=\"crd-webhook-1582/e2e-test-crd-conversion-webhook-g6hq2\" objectUID=6bf47db6-ca56-4e6c-88a8-dbb76c9dfe21 kind=\"EndpointSlice\" virtual=false\nI1012 18:36:05.135084       1 garbagecollector.go:580] \"Deleting object\" object=\"crd-webhook-1582/e2e-test-crd-conversion-webhook-g6hq2\" objectUID=6bf47db6-ca56-4e6c-88a8-dbb76c9dfe21 kind=\"EndpointSlice\" propagationPolicy=Background\nI1012 18:36:05.203597       1 garbagecollector.go:471] \"Processing object\" object=\"crd-webhook-1582/sample-crd-conversion-webhook-deployment-697cdbd8f4\" objectUID=c0a9ca98-a788-4969-9915-9b53ae0b28f3 kind=\"ReplicaSet\" 
virtual=false\nI1012 18:36:05.203900       1 deployment_controller.go:583] \"Deployment has been deleted\" deployment=\"crd-webhook-1582/sample-crd-conversion-webhook-deployment\"\nE1012 18:36:05.205829       1 tokens_controller.go:262] error synchronizing serviceaccount pv-3500/default: secrets \"default-token-zksvq\" is forbidden: unable to create new content in namespace pv-3500 because it is being terminated\nI1012 18:36:05.207702       1 garbagecollector.go:580] \"Deleting object\" object=\"crd-webhook-1582/sample-crd-conversion-webhook-deployment-697cdbd8f4\" objectUID=c0a9ca98-a788-4969-9915-9b53ae0b28f3 kind=\"ReplicaSet\" propagationPolicy=Background\nI1012 18:36:05.210788       1 garbagecollector.go:471] \"Processing object\" object=\"crd-webhook-1582/sample-crd-conversion-webhook-deployment-697cdbd8f4-bdwhc\" objectUID=57bea633-b39e-4bea-9372-772a0809fc7a kind=\"Pod\" virtual=false\nI1012 18:36:05.212845       1 garbagecollector.go:580] \"Deleting object\" object=\"crd-webhook-1582/sample-crd-conversion-webhook-deployment-697cdbd8f4-bdwhc\" objectUID=57bea633-b39e-4bea-9372-772a0809fc7a kind=\"Pod\" propagationPolicy=Background\nI1012 18:36:05.517849       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-5746-85/csi-mockplugin-6ffcbb66db\" objectUID=71f43b90-96d8-4974-8288-1d3ad9fc446b kind=\"ControllerRevision\" virtual=false\nI1012 18:36:05.517962       1 stateful_set.go:440] StatefulSet has been deleted csi-mock-volumes-5746-85/csi-mockplugin\nI1012 18:36:05.518093       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-5746-85/csi-mockplugin-0\" objectUID=9e197338-a155-4781-8a75-cb5ea241a362 kind=\"Pod\" virtual=false\nI1012 18:36:05.521731       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-5746-85/csi-mockplugin-0\" objectUID=9e197338-a155-4781-8a75-cb5ea241a362 kind=\"Pod\" propagationPolicy=Background\nI1012 18:36:05.521915       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-5746-85/csi-mockplugin-6ffcbb66db\" objectUID=71f43b90-96d8-4974-8288-1d3ad9fc446b kind=\"ControllerRevision\" propagationPolicy=Background\nI1012 18:36:05.559570       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-ec5eba8d-3bd9-469b-bd05-9c29f5bfa4a7\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-086f04460a3abd69a\") on node \"ip-172-20-37-53.us-west-1.compute.internal\" \nI1012 18:36:05.563626       1 operation_generator.go:1577] Verified volume is safe to detach for volume \"pvc-ec5eba8d-3bd9-469b-bd05-9c29f5bfa4a7\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-086f04460a3abd69a\") on node \"ip-172-20-37-53.us-west-1.compute.internal\" \nI1012 18:36:05.629663       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-5746-85/csi-mockplugin-attacher-86777dbc74\" objectUID=9d141f22-4c82-4f93-9f17-04cb155473fa kind=\"ControllerRevision\" virtual=false\nI1012 18:36:05.629951       1 stateful_set.go:440] StatefulSet has been deleted csi-mock-volumes-5746-85/csi-mockplugin-attacher\nI1012 18:36:05.630000       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-5746-85/csi-mockplugin-attacher-0\" objectUID=71e6a199-8c11-456c-9465-46fe41f53e61 kind=\"Pod\" virtual=false\nI1012 18:36:05.631826       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-5746-85/csi-mockplugin-attacher-0\" objectUID=71e6a199-8c11-456c-9465-46fe41f53e61 kind=\"Pod\" propagationPolicy=Background\nI1012 
18:36:05.632858       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-5746-85/csi-mockplugin-attacher-86777dbc74\" objectUID=9d141f22-4c82-4f93-9f17-04cb155473fa kind=\"ControllerRevision\" propagationPolicy=Background\nI1012 18:36:05.640428       1 namespace_controller.go:185] Namespace has been deleted kubectl-4771\nI1012 18:36:06.168357       1 namespace_controller.go:185] Namespace has been deleted provisioning-2713\nI1012 18:36:06.434196       1 namespace_controller.go:185] Namespace has been deleted projected-9462\nI1012 18:36:06.796743       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"fsgroupchangepolicy-376/aws5w5hp\"\nI1012 18:36:06.804866       1 pv_controller.go:640] volume \"pvc-ec5eba8d-3bd9-469b-bd05-9c29f5bfa4a7\" is released and reclaim policy \"Delete\" will be executed\nI1012 18:36:06.807955       1 pv_controller.go:879] volume \"pvc-ec5eba8d-3bd9-469b-bd05-9c29f5bfa4a7\" entered phase \"Released\"\nI1012 18:36:06.809831       1 pv_controller.go:1340] isVolumeReleased[pvc-ec5eba8d-3bd9-469b-bd05-9c29f5bfa4a7]: volume is released\nI1012 18:36:07.769884       1 job_controller.go:406] enqueueing job job-9706/backofflimit\nI1012 18:36:07.776129       1 job_controller.go:406] enqueueing job job-9706/backofflimit\nI1012 18:36:07.776927       1 event.go:291] \"Event occurred\" object=\"job-9706/backofflimit\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: backofflimit--1-qnfsw\"\nI1012 18:36:07.781451       1 job_controller.go:406] enqueueing job job-9706/backofflimit\nI1012 18:36:07.788698       1 job_controller.go:406] enqueueing job job-9706/backofflimit\nI1012 18:36:08.161663       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-5746\nE1012 18:36:09.408349       1 tokens_controller.go:262] error synchronizing serviceaccount svcaccounts-3220/default: secrets \"default-token-n9x2t\" is forbidden: unable to create new content in namespace svcaccounts-3220 because it is being terminated\nI1012 18:36:09.479579       1 namespace_controller.go:185] Namespace has been deleted pods-7528\nE1012 18:36:09.750357       1 namespace_controller.go:162] deletion of namespace pods-4841 failed: unexpected items still remain in namespace: pods-4841 for gvr: /v1, Resource=pods\nI1012 18:36:09.756165       1 namespace_controller.go:185] Namespace has been deleted provisioning-8118\nE1012 18:36:09.771444       1 namespace_controller.go:162] deletion of namespace svcaccounts-6418 failed: unexpected items still remain in namespace: svcaccounts-6418 for gvr: /v1, Resource=pods\nE1012 18:36:10.149604       1 tokens_controller.go:262] error synchronizing serviceaccount crd-webhook-1582/default: secrets \"default-token-rw8tr\" is forbidden: unable to create new content in namespace crd-webhook-1582 because it is being terminated\nI1012 18:36:10.328371       1 namespace_controller.go:185] Namespace has been deleted pv-3500\nI1012 18:36:10.459149       1 event.go:291] \"Event occurred\" object=\"provisioning-574-6125/csi-hostpathplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful\"\nI1012 18:36:10.615662       1 event.go:291] \"Event occurred\" object=\"provisioning-574/csi-hostpathq4d4n\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external 
provisioner \\\"csi-hostpath-provisioning-574\\\" or manually created by system administrator\"\nE1012 18:36:12.413152       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI1012 18:36:12.669469       1 job_controller.go:406] enqueueing job job-9706/backofflimit\nI1012 18:36:12.673930       1 event.go:291] \"Event occurred\" object=\"job-9706/backofflimit\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: backofflimit--1-2z7p2\"\nI1012 18:36:12.674382       1 job_controller.go:406] enqueueing job job-9706/backofflimit\nI1012 18:36:12.678955       1 job_controller.go:406] enqueueing job job-9706/backofflimit\nI1012 18:36:12.684115       1 job_controller.go:406] enqueueing job job-9706/backofflimit\nE1012 18:36:12.684467       1 job_controller.go:441] Error syncing job: failed pod(s) detected for job key \"job-9706/backofflimit\"\nE1012 18:36:13.611378       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI1012 18:36:13.705525       1 event.go:291] \"Event occurred\" object=\"statefulset-3442/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss-0 in StatefulSet ss successful\"\nW1012 18:36:13.730264       1 reconciler.go:335] Multi-Attach error for volume \"pvc-b7e70ef8-3f50-4efc-b4c8-55b9a39dc826\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0ba7e76cc0ce50e83\") from node \"ip-172-20-56-153.us-west-1.compute.internal\" Volume is already exclusively attached to node ip-172-20-37-53.us-west-1.compute.internal and can't be attached to another\nI1012 18:36:13.730391       1 event.go:291] \"Event occurred\" object=\"statefulset-3442/ss-0\" kind=\"Pod\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedAttachVolume\" message=\"Multi-Attach error for volume \\\"pvc-b7e70ef8-3f50-4efc-b4c8-55b9a39dc826\\\" Volume is already exclusively attached to one node and can't be attached to another\"\nE1012 18:36:13.759984       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI1012 18:36:14.153098       1 garbagecollector.go:471] \"Processing object\" object=\"webhook-509/e2e-test-webhook-66b2g\" objectUID=e99898ef-4f0b-4f23-9a3a-ff21f6bc4666 kind=\"EndpointSlice\" virtual=false\nI1012 18:36:14.185735       1 garbagecollector.go:580] \"Deleting object\" object=\"webhook-509/e2e-test-webhook-66b2g\" objectUID=e99898ef-4f0b-4f23-9a3a-ff21f6bc4666 kind=\"EndpointSlice\" propagationPolicy=Background\nI1012 18:36:14.211986       1 garbagecollector.go:471] \"Processing object\" object=\"webhook-509/sample-webhook-deployment-78988fc6cd\" objectUID=675354b0-c211-49e6-b499-11cc0a5bf4d8 kind=\"ReplicaSet\" virtual=false\nI1012 18:36:14.212013       1 deployment_controller.go:583] \"Deployment has been deleted\" deployment=\"webhook-509/sample-webhook-deployment\"\nI1012 18:36:14.214173       1 garbagecollector.go:580] \"Deleting object\" object=\"webhook-509/sample-webhook-deployment-78988fc6cd\" objectUID=675354b0-c211-49e6-b499-11cc0a5bf4d8 kind=\"ReplicaSet\" propagationPolicy=Background\nI1012 
18:36:14.218248       1 garbagecollector.go:471] \"Processing object\" object=\"webhook-509/sample-webhook-deployment-78988fc6cd-7zvj2\" objectUID=0cad0958-d5c8-44ce-b718-a5e8d7f4af70 kind=\"Pod\" virtual=false\nI1012 18:36:14.221691       1 garbagecollector.go:580] \"Deleting object\" object=\"webhook-509/sample-webhook-deployment-78988fc6cd-7zvj2\" objectUID=0cad0958-d5c8-44ce-b718-a5e8d7f4af70 kind=\"Pod\" propagationPolicy=Background\nI1012 18:36:14.508054       1 namespace_controller.go:185] Namespace has been deleted svcaccounts-3220\nI1012 18:36:15.087278       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"provisioning-1607/pvc-n7s6x\"\nI1012 18:36:15.097055       1 pv_controller.go:640] volume \"local-d46nt\" is released and reclaim policy \"Retain\" will be executed\nI1012 18:36:15.100203       1 pv_controller.go:879] volume \"local-d46nt\" entered phase \"Released\"\nI1012 18:36:15.141024       1 pv_controller_base.go:505] deletion of claim \"provisioning-1607/pvc-n7s6x\" was already processed\nI1012 18:36:15.197652       1 namespace_controller.go:185] Namespace has been deleted crd-webhook-1582\nE1012 18:36:15.530890       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI1012 18:36:16.069434       1 job_controller.go:406] enqueueing job job-9706/backofflimit\nI1012 18:36:16.198586       1 pv_controller.go:1340] isVolumeReleased[pvc-ef2cc40a-355d-46b5-ada7-9ba8be59289b]: volume is released\nI1012 18:36:16.343117       1 pv_controller_base.go:505] deletion of claim \"provisioning-6739/aws2455t\" was already processed\nI1012 18:36:16.623272       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume \"pvc-ef2cc40a-355d-46b5-ada7-9ba8be59289b\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-09ca24a10595e130d\") on node \"ip-172-20-37-53.us-west-1.compute.internal\" \nE1012 18:36:17.178234       1 tokens_controller.go:262] error synchronizing serviceaccount kubectl-3908/default: secrets \"default-token-m4wbz\" is forbidden: unable to create new content in namespace kubectl-3908 because it is being terminated\nI1012 18:36:17.446373       1 namespace_controller.go:185] Namespace has been deleted volume-6\nI1012 18:36:17.455180       1 job_controller.go:406] enqueueing job job-9706/backofflimit\nI1012 18:36:17.469393       1 namespace_controller.go:185] Namespace has been deleted pod-network-test-3509\nI1012 18:36:17.838662       1 pv_controller.go:879] volume \"pvc-c530bd21-94f2-490c-8369-0f00b7d190da\" entered phase \"Bound\"\nI1012 18:36:17.838857       1 pv_controller.go:982] volume \"pvc-c530bd21-94f2-490c-8369-0f00b7d190da\" bound to claim \"provisioning-574/csi-hostpathq4d4n\"\nI1012 18:36:17.847561       1 pv_controller.go:823] claim \"provisioning-574/csi-hostpathq4d4n\" entered phase \"Bound\"\nI1012 18:36:17.872311       1 namespace_controller.go:185] Namespace has been deleted provisioning-3885\nI1012 18:36:18.007049       1 stateful_set_control.go:521] StatefulSet statefulset-1322/ss2 terminating Pod ss2-2 for scale down\nI1012 18:36:18.013317       1 event.go:291] \"Event occurred\" object=\"statefulset-1322/ss2\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"delete Pod ss2-2 in StatefulSet ss2 successful\"\nI1012 18:36:18.237344       1 pv_controller.go:1340] isVolumeReleased[pvc-ec5eba8d-3bd9-469b-bd05-9c29f5bfa4a7]: 
volume is released\nI1012 18:36:18.303328       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-3336/pod-fe4a3cf6-298b-423b-95c2-034dc55cf24d\" PVC=\"persistent-local-volumes-test-3336/pvc-k92ct\"\nI1012 18:36:18.303516       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-3336/pvc-k92ct\"\nI1012 18:36:18.343332       1 garbagecollector.go:471] \"Processing object\" object=\"endpointslice-5183/example-empty-selector-dhzbw\" objectUID=fc500f06-2d72-43e1-9772-ce6a8f4ed846 kind=\"EndpointSlice\" virtual=false\nI1012 18:36:18.347495       1 garbagecollector.go:580] \"Deleting object\" object=\"endpointslice-5183/example-empty-selector-dhzbw\" objectUID=fc500f06-2d72-43e1-9772-ce6a8f4ed846 kind=\"EndpointSlice\" propagationPolicy=Background\nI1012 18:36:18.360530       1 graph_builder.go:587] add [v1/Pod, namespace: ephemeral-3192, name: inline-volume-tester2-s7klr, uid: a942c22b-9ed5-4580-a125-9c274cc94035] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI1012 18:36:18.360787       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-3192/inline-volume-tester2-s7klr-my-volume-0\" objectUID=ebec9b1f-8a87-4cd5-af8e-a2253883cc37 kind=\"PersistentVolumeClaim\" virtual=false\nI1012 18:36:18.361137       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-3192/inline-volume-tester2-s7klr\" objectUID=a942c22b-9ed5-4580-a125-9c274cc94035 kind=\"Pod\" virtual=false\nI1012 18:36:18.364106       1 garbagecollector.go:595] adding [v1/PersistentVolumeClaim, namespace: ephemeral-3192, name: inline-volume-tester2-s7klr-my-volume-0, uid: ebec9b1f-8a87-4cd5-af8e-a2253883cc37] to attemptToDelete, because its owner [v1/Pod, namespace: ephemeral-3192, name: inline-volume-tester2-s7klr, uid: a942c22b-9ed5-4580-a125-9c274cc94035] is deletingDependents\nI1012 18:36:18.366830       1 garbagecollector.go:580] \"Deleting object\" object=\"ephemeral-3192/inline-volume-tester2-s7klr-my-volume-0\" objectUID=ebec9b1f-8a87-4cd5-af8e-a2253883cc37 kind=\"PersistentVolumeClaim\" propagationPolicy=Background\nI1012 18:36:18.375107       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-3192/inline-volume-tester2-s7klr-my-volume-0\" objectUID=ebec9b1f-8a87-4cd5-af8e-a2253883cc37 kind=\"PersistentVolumeClaim\" virtual=false\nI1012 18:36:18.375336       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"ephemeral-3192/inline-volume-tester2-s7klr\" PVC=\"ephemeral-3192/inline-volume-tester2-s7klr-my-volume-0\"\nI1012 18:36:18.375368       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"ephemeral-3192/inline-volume-tester2-s7klr-my-volume-0\"\nI1012 18:36:18.377398       1 pv_controller_base.go:505] deletion of claim \"fsgroupchangepolicy-376/aws5w5hp\" was already processed\nI1012 18:36:18.569624       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume \"pvc-ec5eba8d-3bd9-469b-bd05-9c29f5bfa4a7\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-086f04460a3abd69a\") on node \"ip-172-20-37-53.us-west-1.compute.internal\" \nI1012 18:36:18.873534       1 event.go:291] \"Event occurred\" object=\"volume-expand-9029-2193/csi-hostpathplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful\"\nI1012 18:36:19.029386       1 event.go:291] \"Event occurred\" 
object=\"volume-expand-9029/csi-hostpathjsnk6\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-volume-expand-9029\\\" or manually created by system administrator\"\nI1012 18:36:19.094295       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-c530bd21-94f2-490c-8369-0f00b7d190da\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-provisioning-574^49ae8e6e-2b8b-11ec-89e6-d6f7eef15c96\") from node \"ip-172-20-47-26.us-west-1.compute.internal\" \nE1012 18:36:19.205902       1 tokens_controller.go:262] error synchronizing serviceaccount webhook-509-markers/default: secrets \"default-token-bsxn5\" is forbidden: unable to create new content in namespace webhook-509-markers because it is being terminated\nE1012 18:36:19.334323       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI1012 18:36:19.430540       1 pv_controller.go:930] claim \"provisioning-2621/pvc-nzh8z\" bound to volume \"local-ff7tc\"\nI1012 18:36:19.430991       1 event.go:291] \"Event occurred\" object=\"volume-expand-9029/csi-hostpathjsnk6\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-volume-expand-9029\\\" or manually created by system administrator\"\nI1012 18:36:19.445507       1 pv_controller.go:879] volume \"local-ff7tc\" entered phase \"Bound\"\nI1012 18:36:19.445794       1 pv_controller.go:982] volume \"local-ff7tc\" bound to claim \"provisioning-2621/pvc-nzh8z\"\nI1012 18:36:19.461934       1 pv_controller.go:823] claim \"provisioning-2621/pvc-nzh8z\" entered phase \"Bound\"\nE1012 18:36:19.600594       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nW1012 18:36:19.660003       1 endpointslice_controller.go:306] Error syncing endpoint slices for service \"statefulset-1322/test\", retrying. 
Error: EndpointSlice informer cache is out of date\nI1012 18:36:19.661243       1 stateful_set_control.go:521] StatefulSet statefulset-1322/ss2 terminating Pod ss2-1 for scale down\nI1012 18:36:19.664903       1 event.go:291] \"Event occurred\" object=\"statefulset-1322/ss2\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"delete Pod ss2-1 in StatefulSet ss2 successful\"\nI1012 18:36:19.668375       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume \"pvc-c530bd21-94f2-490c-8369-0f00b7d190da\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-provisioning-574^49ae8e6e-2b8b-11ec-89e6-d6f7eef15c96\") from node \"ip-172-20-47-26.us-west-1.compute.internal\" \nI1012 18:36:19.668858       1 event.go:291] \"Event occurred\" object=\"provisioning-574/pod-subpath-test-dynamicpv-h5lh\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-c530bd21-94f2-490c-8369-0f00b7d190da\\\" \"\nI1012 18:36:19.699182       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"gc-5397/simpletest.deployment-759c5f5647\" need=2 creating=2\nI1012 18:36:19.700374       1 event.go:291] \"Event occurred\" object=\"gc-5397/simpletest.deployment\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set simpletest.deployment-759c5f5647 to 2\"\nI1012 18:36:19.712150       1 event.go:291] \"Event occurred\" object=\"gc-5397/simpletest.deployment-759c5f5647\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.deployment-759c5f5647-br58t\"\nI1012 18:36:19.735701       1 event.go:291] \"Event occurred\" object=\"gc-5397/simpletest.deployment-759c5f5647\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.deployment-759c5f5647-qzt22\"\nI1012 18:36:19.738683       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"gc-5397/simpletest.deployment\" err=\"Operation cannot be fulfilled on deployments.apps \\\"simpletest.deployment\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI1012 18:36:19.807664       1 garbagecollector.go:471] \"Processing object\" object=\"gc-5397/simpletest.deployment-759c5f5647\" objectUID=7f62ac3e-11a2-422b-ae48-0d14a0ea514c kind=\"ReplicaSet\" virtual=false\nI1012 18:36:19.807939       1 deployment_controller.go:583] \"Deployment has been deleted\" deployment=\"gc-5397/simpletest.deployment\"\nI1012 18:36:19.809334       1 garbagecollector.go:580] \"Deleting object\" object=\"gc-5397/simpletest.deployment-759c5f5647\" objectUID=7f62ac3e-11a2-422b-ae48-0d14a0ea514c kind=\"ReplicaSet\" propagationPolicy=Background\nI1012 18:36:19.812073       1 garbagecollector.go:471] \"Processing object\" object=\"gc-5397/simpletest.deployment-759c5f5647-br58t\" objectUID=0fd3244c-efbb-4b93-aed2-7897380029a4 kind=\"Pod\" virtual=false\nI1012 18:36:19.812462       1 garbagecollector.go:471] \"Processing object\" object=\"gc-5397/simpletest.deployment-759c5f5647-qzt22\" objectUID=d3e3d422-de0a-4461-8de2-8069c17a55e8 kind=\"Pod\" virtual=false\nI1012 18:36:19.815884       1 garbagecollector.go:580] \"Deleting object\" object=\"gc-5397/simpletest.deployment-759c5f5647-qzt22\" objectUID=d3e3d422-de0a-4461-8de2-8069c17a55e8 kind=\"Pod\" propagationPolicy=Background\nI1012 18:36:19.816012       1 
garbagecollector.go:580] \"Deleting object\" object=\"gc-5397/simpletest.deployment-759c5f5647-br58t\" objectUID=0fd3244c-efbb-4b93-aed2-7897380029a4 kind=\"Pod\" propagationPolicy=Background\nE1012 18:36:20.075259       1 tokens_controller.go:262] error synchronizing serviceaccount prestop-9283/default: secrets \"default-token-kn5fq\" is forbidden: unable to create new content in namespace prestop-9283 because it is being terminated\nI1012 18:36:20.479736       1 stateful_set_control.go:521] StatefulSet statefulset-1322/ss2 terminating Pod ss2-0 for scale down\nI1012 18:36:20.490362       1 event.go:291] \"Event occurred\" object=\"statefulset-1322/ss2\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"delete Pod ss2-0 in StatefulSet ss2 successful\"\nI1012 18:36:20.630962       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-5205/pod-e61dab01-2028-4b7e-81ca-c2f4115bab3b\" PVC=\"persistent-local-volumes-test-5205/pvc-26l8q\"\nI1012 18:36:20.631734       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-5205/pvc-26l8q\"\nI1012 18:36:20.653045       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-3336/pod-fe4a3cf6-298b-423b-95c2-034dc55cf24d\" PVC=\"persistent-local-volumes-test-3336/pvc-k92ct\"\nI1012 18:36:20.653126       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-3336/pvc-k92ct\"\nI1012 18:36:21.056082       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-3336/pod-fe4a3cf6-298b-423b-95c2-034dc55cf24d\" PVC=\"persistent-local-volumes-test-3336/pvc-k92ct\"\nI1012 18:36:21.056106       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-3336/pvc-k92ct\"\nI1012 18:36:21.063067       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-3336/pod-d7239a98-50d9-4a87-8fcd-4081ca58b0ba\" PVC=\"persistent-local-volumes-test-3336/pvc-k92ct\"\nI1012 18:36:21.063091       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-3336/pvc-k92ct\"\nI1012 18:36:21.160509       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-5746-85\nE1012 18:36:21.193674       1 tokens_controller.go:262] error synchronizing serviceaccount volumemode-3440/default: secrets \"default-token-68vb9\" is forbidden: unable to create new content in namespace volumemode-3440 because it is being terminated\nI1012 18:36:21.290189       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-5934-9147/csi-mockplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-mockplugin-0 in StatefulSet csi-mockplugin successful\"\nI1012 18:36:22.053790       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-3336/pod-d7239a98-50d9-4a87-8fcd-4081ca58b0ba\" PVC=\"persistent-local-volumes-test-3336/pvc-k92ct\"\nI1012 18:36:22.053913       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-3336/pvc-k92ct\"\nI1012 18:36:22.315637       1 namespace_controller.go:185] Namespace has been deleted kubectl-3908\nI1012 18:36:22.454878       1 pvc_protection_controller.go:303] \"Pod uses PVC\" 
pod=\"persistent-local-volumes-test-3336/pod-d7239a98-50d9-4a87-8fcd-4081ca58b0ba\" PVC=\"persistent-local-volumes-test-3336/pvc-k92ct\"\nI1012 18:36:22.455071       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-3336/pvc-k92ct\"\nI1012 18:36:22.463567       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"persistent-local-volumes-test-3336/pvc-k92ct\"\nI1012 18:36:22.476419       1 pv_controller.go:640] volume \"local-pvdlknv\" is released and reclaim policy \"Retain\" will be executed\nI1012 18:36:22.482023       1 pv_controller.go:879] volume \"local-pvdlknv\" entered phase \"Released\"\nI1012 18:36:22.484408       1 pv_controller_base.go:505] deletion of claim \"persistent-local-volumes-test-3336/pvc-k92ct\" was already processed\nI1012 18:36:22.685801       1 event.go:291] \"Event occurred\" object=\"job-9706/backofflimit\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Warning\" reason=\"BackoffLimitExceeded\" message=\"Job has reached the specified backoff limit\"\nI1012 18:36:22.691185       1 job_controller.go:406] enqueueing job job-9706/backofflimit\nE1012 18:36:22.739854       1 pv_controller.go:1451] error finding provisioning plugin for claim volume-953/pvc-hzgbd: storageclass.storage.k8s.io \"volume-953\" not found\nI1012 18:36:22.740129       1 event.go:291] \"Event occurred\" object=\"volume-953/pvc-hzgbd\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"volume-953\\\" not found\"\nI1012 18:36:22.796238       1 pv_controller.go:879] volume \"pvc-62cc9407-d61d-49af-baca-4f66937123b5\" entered phase \"Bound\"\nI1012 18:36:22.796555       1 pv_controller.go:982] volume \"pvc-62cc9407-d61d-49af-baca-4f66937123b5\" bound to claim \"volume-expand-9029/csi-hostpathjsnk6\"\nI1012 18:36:22.799606       1 pv_controller.go:879] volume \"local-7lvpd\" entered phase \"Available\"\nI1012 18:36:22.806127       1 pv_controller.go:823] claim \"volume-expand-9029/csi-hostpathjsnk6\" entered phase \"Bound\"\nE1012 18:36:23.578213       1 tokens_controller.go:262] error synchronizing serviceaccount endpointslice-5183/default: secrets \"default-token-76rdv\" is forbidden: unable to create new content in namespace endpointslice-5183 because it is being terminated\nI1012 18:36:23.584903       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-b7e70ef8-3f50-4efc-b4c8-55b9a39dc826\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0ba7e76cc0ce50e83\") on node \"ip-172-20-37-53.us-west-1.compute.internal\" \nI1012 18:36:23.589254       1 operation_generator.go:1577] Verified volume is safe to detach for volume \"pvc-b7e70ef8-3f50-4efc-b4c8-55b9a39dc826\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0ba7e76cc0ce50e83\") on node \"ip-172-20-37-53.us-west-1.compute.internal\" \nI1012 18:36:24.277272       1 namespace_controller.go:185] Namespace has been deleted webhook-509-markers\nI1012 18:36:24.316805       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-5205/pod-e61dab01-2028-4b7e-81ca-c2f4115bab3b\" PVC=\"persistent-local-volumes-test-5205/pvc-26l8q\"\nI1012 18:36:24.316833       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-5205/pvc-26l8q\"\nI1012 18:36:24.359930       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"services-451/up-down-1\" need=3 creating=3\nI1012 18:36:24.370473     
  1 event.go:291] \"Event occurred\" object=\"services-451/up-down-1\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: up-down-1-qgcz4\"\nI1012 18:36:24.389665       1 event.go:291] \"Event occurred\" object=\"services-451/up-down-1\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: up-down-1-wnpdx\"\nI1012 18:36:24.395484       1 event.go:291] \"Event occurred\" object=\"services-451/up-down-1\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: up-down-1-l87cl\"\nI1012 18:36:24.548227       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-5205/pod-e61dab01-2028-4b7e-81ca-c2f4115bab3b\" PVC=\"persistent-local-volumes-test-5205/pvc-26l8q\"\nI1012 18:36:24.548305       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-5205/pvc-26l8q\"\nI1012 18:36:24.555995       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-5205/pod-e61dab01-2028-4b7e-81ca-c2f4115bab3b\" PVC=\"persistent-local-volumes-test-5205/pvc-26l8q\"\nI1012 18:36:24.556020       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-5205/pvc-26l8q\"\nI1012 18:36:24.583992       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-7892-6524/csi-mockplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-mockplugin-0 in StatefulSet csi-mockplugin successful\"\nI1012 18:36:24.671018       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-7892-6524/csi-mockplugin-attacher\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-mockplugin-attacher-0 in StatefulSet csi-mockplugin-attacher successful\"\nE1012 18:36:24.965109       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-6739/default: secrets \"default-token-cxng2\" is forbidden: unable to create new content in namespace provisioning-6739 because it is being terminated\nE1012 18:36:25.150109       1 pv_controller.go:1451] error finding provisioning plugin for claim provisioning-9572/pvc-7rp58: storageclass.storage.k8s.io \"provisioning-9572\" not found\nI1012 18:36:25.151090       1 event.go:291] \"Event occurred\" object=\"provisioning-9572/pvc-7rp58\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"provisioning-9572\\\" not found\"\nI1012 18:36:25.213386       1 pv_controller.go:879] volume \"local-spjnr\" entered phase \"Available\"\nI1012 18:36:25.507782       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-5205/pod-e61dab01-2028-4b7e-81ca-c2f4115bab3b\" PVC=\"persistent-local-volumes-test-5205/pvc-26l8q\"\nI1012 18:36:25.507812       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-5205/pvc-26l8q\"\nI1012 18:36:25.837381       1 pv_controller.go:879] volume \"hostpath-x9cmx\" entered phase \"Available\"\nI1012 18:36:25.914781       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-5205/pod-e61dab01-2028-4b7e-81ca-c2f4115bab3b\" 
PVC=\"persistent-local-volumes-test-5205/pvc-26l8q\"\nI1012 18:36:25.915167       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-5205/pvc-26l8q\"\nI1012 18:36:25.923002       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"persistent-local-volumes-test-5205/pvc-26l8q\"\nI1012 18:36:25.935906       1 pv_controller.go:640] volume \"local-pv4xpjf\" is released and reclaim policy \"Retain\" will be executed\nI1012 18:36:25.943668       1 pv_controller.go:879] volume \"local-pv4xpjf\" entered phase \"Released\"\nI1012 18:36:25.963322       1 pv_controller_base.go:505] deletion of claim \"persistent-local-volumes-test-5205/pvc-26l8q\" was already processed\nI1012 18:36:25.968313       1 namespace_controller.go:185] Namespace has been deleted endpointslice-5656\nI1012 18:36:25.987450       1 namespace_controller.go:185] Namespace has been deleted provisioning-1607\nI1012 18:36:26.000420       1 deployment_controller.go:583] \"Deployment has been deleted\" deployment=\"kubectl-6299/httpd-deployment\"\nE1012 18:36:26.028505       1 pv_controller.go:1451] error finding provisioning plugin for claim provisioning-5374/pvc-pzkkp: storageclass.storage.k8s.io \"provisioning-5374\" not found\nI1012 18:36:26.028994       1 event.go:291] \"Event occurred\" object=\"provisioning-5374/pvc-pzkkp\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"provisioning-5374\\\" not found\"\nI1012 18:36:26.105005       1 pv_controller.go:879] volume \"local-np7b5\" entered phase \"Available\"\nI1012 18:36:26.284377       1 namespace_controller.go:185] Namespace has been deleted volumemode-3440\nI1012 18:36:26.882291       1 garbagecollector.go:471] \"Processing object\" object=\"services-674/affinity-clusterip-timeout-v9dp5\" objectUID=1fb608f7-03c4-45cd-b560-889696084508 kind=\"Pod\" virtual=false\nI1012 18:36:26.882713       1 garbagecollector.go:471] \"Processing object\" object=\"services-674/affinity-clusterip-timeout-cr5tz\" objectUID=b70e4565-b4cb-4ac3-ba8e-032f7fbc28e5 kind=\"Pod\" virtual=false\nI1012 18:36:26.882760       1 garbagecollector.go:471] \"Processing object\" object=\"services-674/affinity-clusterip-timeout-5k2gq\" objectUID=1a81a67b-7b59-4c65-8a08-fc5899eb7ae9 kind=\"Pod\" virtual=false\nI1012 18:36:26.885790       1 garbagecollector.go:580] \"Deleting object\" object=\"services-674/affinity-clusterip-timeout-5k2gq\" objectUID=1a81a67b-7b59-4c65-8a08-fc5899eb7ae9 kind=\"Pod\" propagationPolicy=Background\nI1012 18:36:26.885979       1 garbagecollector.go:580] \"Deleting object\" object=\"services-674/affinity-clusterip-timeout-v9dp5\" objectUID=1fb608f7-03c4-45cd-b560-889696084508 kind=\"Pod\" propagationPolicy=Background\nI1012 18:36:26.886231       1 garbagecollector.go:580] \"Deleting object\" object=\"services-674/affinity-clusterip-timeout-cr5tz\" objectUID=b70e4565-b4cb-4ac3-ba8e-032f7fbc28e5 kind=\"Pod\" propagationPolicy=Background\nW1012 18:36:26.901575       1 endpointslice_controller.go:306] Error syncing endpoint slices for service \"services-674/affinity-clusterip-timeout\", retrying. 
Error: EndpointSlice informer cache is out of date\nI1012 18:36:27.213850       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"services-2178/externalsvc\" need=2 creating=2\nI1012 18:36:27.224693       1 event.go:291] \"Event occurred\" object=\"services-2178/externalsvc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: externalsvc-d7whv\"\nI1012 18:36:27.241373       1 event.go:291] \"Event occurred\" object=\"services-2178/externalsvc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: externalsvc-scrxw\"\nE1012 18:36:27.921522       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI1012 18:36:28.209135       1 garbagecollector.go:471] \"Processing object\" object=\"statefulset-1322/ss2-5bbbc9fc94\" objectUID=ded5d4b2-e30b-4065-85ca-f5471a4e4438 kind=\"ControllerRevision\" virtual=false\nI1012 18:36:28.209413       1 stateful_set.go:440] StatefulSet has been deleted statefulset-1322/ss2\nI1012 18:36:28.209462       1 garbagecollector.go:471] \"Processing object\" object=\"statefulset-1322/ss2-677d6db895\" objectUID=7f57fb39-ab96-436b-9b7e-c783390bdcf9 kind=\"ControllerRevision\" virtual=false\nI1012 18:36:28.211507       1 garbagecollector.go:580] \"Deleting object\" object=\"statefulset-1322/ss2-5bbbc9fc94\" objectUID=ded5d4b2-e30b-4065-85ca-f5471a4e4438 kind=\"ControllerRevision\" propagationPolicy=Background\nI1012 18:36:28.211811       1 garbagecollector.go:580] \"Deleting object\" object=\"statefulset-1322/ss2-677d6db895\" objectUID=7f57fb39-ab96-436b-9b7e-c783390bdcf9 kind=\"ControllerRevision\" propagationPolicy=Background\nI1012 18:36:28.602616       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"volume-expand-3060/csi-hostpathhmzs7\"\nI1012 18:36:28.608598       1 pv_controller.go:640] volume \"pvc-38ad371e-5804-400c-a7db-b95a0c912075\" is released and reclaim policy \"Delete\" will be executed\nI1012 18:36:28.610707       1 pv_controller.go:879] volume \"pvc-38ad371e-5804-400c-a7db-b95a0c912075\" entered phase \"Released\"\nI1012 18:36:28.613043       1 pv_controller.go:1340] isVolumeReleased[pvc-38ad371e-5804-400c-a7db-b95a0c912075]: volume is released\nI1012 18:36:28.614841       1 pv_controller.go:1340] isVolumeReleased[pvc-38ad371e-5804-400c-a7db-b95a0c912075]: volume is released\nI1012 18:36:28.657755       1 pv_controller_base.go:505] deletion of claim \"volume-expand-3060/csi-hostpathhmzs7\" was already processed\nI1012 18:36:28.677469       1 namespace_controller.go:185] Namespace has been deleted endpointslice-5183\nI1012 18:36:28.842546       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"provisioning-2621/pvc-nzh8z\"\nI1012 18:36:28.843142       1 event.go:291] \"Event occurred\" object=\"volume-6173/awsvsccv\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForFirstConsumer\" message=\"waiting for first consumer to be created before binding\"\nI1012 18:36:28.859079       1 pv_controller.go:640] volume \"local-ff7tc\" is released and reclaim policy \"Retain\" will be executed\nI1012 18:36:28.861745       1 pv_controller.go:879] volume \"local-ff7tc\" entered phase \"Released\"\nI1012 18:36:28.896562       1 pv_controller_base.go:505] deletion of claim \"provisioning-2621/pvc-nzh8z\" was already 
processed\nI1012 18:36:28.966052       1 event.go:291] \"Event occurred\" object=\"volume-6173/awsvsccv\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"ebs.csi.aws.com\\\" or manually created by system administrator\"\nI1012 18:36:29.012788       1 job_controller.go:406] enqueueing job job-9706/backofflimit\nI1012 18:36:29.015963       1 job_controller.go:406] enqueueing job job-9706/backofflimit\nI1012 18:36:29.022279       1 job_controller.go:406] enqueueing job job-9706/backofflimit\nI1012 18:36:29.024967       1 job_controller.go:406] enqueueing job job-9706/backofflimit\nI1012 18:36:29.043770       1 job_controller.go:406] enqueueing job job-9706/backofflimit\nE1012 18:36:29.090936       1 tokens_controller.go:262] error synchronizing serviceaccount job-9706/default: secrets \"default-token-qbn7q\" is forbidden: unable to create new content in namespace job-9706 because it is being terminated\nI1012 18:36:29.192407       1 namespace_controller.go:185] Namespace has been deleted pod-disks-2124\nE1012 18:36:29.261113       1 pv_controller.go:1451] error finding provisioning plugin for claim volume-4721/pvc-q7pfw: storageclass.storage.k8s.io \"volume-4721\" not found\nI1012 18:36:29.261424       1 event.go:291] \"Event occurred\" object=\"volume-4721/pvc-q7pfw\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"volume-4721\\\" not found\"\nI1012 18:36:29.319637       1 pv_controller.go:879] volume \"local-x2xbr\" entered phase \"Available\"\nE1012 18:36:29.395408       1 tokens_controller.go:262] error synchronizing serviceaccount projected-4212/default: secrets \"default-token-ds2p7\" is forbidden: unable to create new content in namespace projected-4212 because it is being terminated\nE1012 18:36:29.854547       1 tokens_controller.go:262] error synchronizing serviceaccount apf-2135/default: secrets \"default-token-rxptl\" is forbidden: unable to create new content in namespace apf-2135 because it is being terminated\nI1012 18:36:29.866879       1 namespace_controller.go:185] Namespace has been deleted webhook-509\nE1012 18:36:29.900502       1 pv_controller.go:1451] error finding provisioning plugin for claim provisioning-8046/pvc-mlmzp: storageclass.storage.k8s.io \"provisioning-8046\" not found\nI1012 18:36:29.901102       1 event.go:291] \"Event occurred\" object=\"provisioning-8046/pvc-mlmzp\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"provisioning-8046\\\" not found\"\nI1012 18:36:29.963792       1 pv_controller.go:879] volume \"local-gj2xk\" entered phase \"Available\"\nI1012 18:36:29.999880       1 namespace_controller.go:185] Namespace has been deleted container-probe-4404\nI1012 18:36:30.076454       1 namespace_controller.go:185] Namespace has been deleted persistent-local-volumes-test-3336\nI1012 18:36:30.118986       1 namespace_controller.go:185] Namespace has been deleted provisioning-6739\nI1012 18:36:30.411488       1 namespace_controller.go:185] Namespace has been deleted projected-9148\nI1012 18:36:30.616782       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"services-451/up-down-2\" need=3 creating=3\nI1012 18:36:30.622432       1 event.go:291] \"Event occurred\" object=\"services-451/up-down-2\" kind=\"ReplicationController\" apiVersion=\"v1\" 
type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: up-down-2-m29vs\"\nI1012 18:36:30.631825       1 event.go:291] \"Event occurred\" object=\"services-451/up-down-2\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: up-down-2-7trcf\"\nI1012 18:36:30.634494       1 event.go:291] \"Event occurred\" object=\"services-451/up-down-2\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: up-down-2-gqxhk\"\nI1012 18:36:31.243125       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-5934/pvc-zs48x\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-mock-csi-mock-volumes-5934\\\" or manually created by system administrator\"\nI1012 18:36:31.257758       1 pv_controller.go:879] volume \"pvc-913791eb-c763-4d4a-8fb3-15528a813a51\" entered phase \"Bound\"\nI1012 18:36:31.257792       1 pv_controller.go:982] volume \"pvc-913791eb-c763-4d4a-8fb3-15528a813a51\" bound to claim \"csi-mock-volumes-5934/pvc-zs48x\"\nI1012 18:36:31.268892       1 pv_controller.go:823] claim \"csi-mock-volumes-5934/pvc-zs48x\" entered phase \"Bound\"\nE1012 18:36:31.317715       1 tokens_controller.go:262] error synchronizing serviceaccount pv-protection-9941/default: secrets \"default-token-vvhsq\" is forbidden: unable to create new content in namespace pv-protection-9941 because it is being terminated\nI1012 18:36:31.588326       1 namespace_controller.go:185] Namespace has been deleted gc-5397\nI1012 18:36:31.844637       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"provisioning-574/csi-hostpathq4d4n\"\nI1012 18:36:31.884771       1 namespace_controller.go:185] Namespace has been deleted container-probe-1693\nI1012 18:36:31.905852       1 pv_controller.go:640] volume \"pvc-c530bd21-94f2-490c-8369-0f00b7d190da\" is released and reclaim policy \"Delete\" will be executed\nI1012 18:36:31.920081       1 pv_controller.go:879] volume \"pvc-c530bd21-94f2-490c-8369-0f00b7d190da\" entered phase \"Released\"\nI1012 18:36:31.930659       1 pv_controller.go:1340] isVolumeReleased[pvc-c530bd21-94f2-490c-8369-0f00b7d190da]: volume is released\nW1012 18:36:31.931004       1 endpointslice_controller.go:306] Error syncing endpoint slices for service \"services-674/affinity-clusterip-timeout\", retrying. 
Error: EndpointSlice informer cache is out of date\nI1012 18:36:32.039747       1 pv_controller_base.go:505] deletion of claim \"provisioning-574/csi-hostpathq4d4n\" was already processed\nI1012 18:36:32.383896       1 namespace_controller.go:185] Namespace has been deleted fsgroupchangepolicy-376\nE1012 18:36:32.391256       1 pv_controller.go:1451] error finding provisioning plugin for claim volume-1627/pvc-f64jp: storageclass.storage.k8s.io \"volume-1627\" not found\nI1012 18:36:32.391454       1 event.go:291] \"Event occurred\" object=\"volume-1627/pvc-f64jp\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"volume-1627\\\" not found\"\nI1012 18:36:32.412499       1 resource_quota_controller.go:307] Resource quota has been deleted resourcequota-847/test-quota\nE1012 18:36:32.456708       1 tokens_controller.go:262] error synchronizing serviceaccount projected-5199/default: secrets \"default-token-wmvgh\" is forbidden: unable to create new content in namespace projected-5199 because it is being terminated\nI1012 18:36:32.460448       1 pv_controller.go:879] volume \"local-c2dth\" entered phase \"Available\"\nE1012 18:36:32.509839       1 tokens_controller.go:262] error synchronizing serviceaccount resourcequota-847/default: secrets \"default-token-jp8th\" is forbidden: unable to create new content in namespace resourcequota-847 because it is being terminated\nI1012 18:36:32.537955       1 pv_controller.go:879] volume \"pvc-2d02b865-4e44-42b7-b9eb-9f591b402a3e\" entered phase \"Bound\"\nI1012 18:36:32.537994       1 pv_controller.go:982] volume \"pvc-2d02b865-4e44-42b7-b9eb-9f591b402a3e\" bound to claim \"volume-6173/awsvsccv\"\nI1012 18:36:32.568666       1 pv_controller.go:823] claim \"volume-6173/awsvsccv\" entered phase \"Bound\"\nI1012 18:36:32.601446       1 namespace_controller.go:185] Namespace has been deleted persistent-local-volumes-test-5205\nI1012 18:36:32.636116       1 garbagecollector.go:471] \"Processing object\" object=\"services-674/affinity-clusterip-timeout-75t7k\" objectUID=429c1159-eb6d-4a84-a24d-162d008c7ca8 kind=\"EndpointSlice\" virtual=false\nI1012 18:36:32.644507       1 garbagecollector.go:580] \"Deleting object\" object=\"services-674/affinity-clusterip-timeout-75t7k\" objectUID=429c1159-eb6d-4a84-a24d-162d008c7ca8 kind=\"EndpointSlice\" propagationPolicy=Background\nI1012 18:36:33.008845       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-2d02b865-4e44-42b7-b9eb-9f591b402a3e\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-018fcedeba91e1c8b\") from node \"ip-172-20-56-153.us-west-1.compute.internal\" \nE1012 18:36:33.072602       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE1012 18:36:33.460454       1 tokens_controller.go:262] error synchronizing serviceaccount statefulset-1322/default: secrets \"default-token-pmrck\" is forbidden: unable to create new content in namespace statefulset-1322 because it is being terminated\nI1012 18:36:33.472814       1 utils.go:366] couldn't find ipfamilies for headless service: services-2178/clusterip-service likely because controller manager is likely connected to an old apiserver that does not support ip families yet. 
The service endpoint slice will use dual stack families until api-server default it correctly\nI1012 18:36:34.181969       1 namespace_controller.go:185] Namespace has been deleted job-9706\nI1012 18:36:34.430842       1 namespace_controller.go:185] Namespace has been deleted projected-4212\nI1012 18:36:34.431100       1 pv_controller.go:930] claim \"volume-1627/pvc-f64jp\" bound to volume \"local-c2dth\"\nI1012 18:36:34.440438       1 pv_controller.go:879] volume \"local-c2dth\" entered phase \"Bound\"\nI1012 18:36:34.440469       1 pv_controller.go:982] volume \"local-c2dth\" bound to claim \"volume-1627/pvc-f64jp\"\nI1012 18:36:34.450789       1 pv_controller.go:823] claim \"volume-1627/pvc-f64jp\" entered phase \"Bound\"\nI1012 18:36:34.450963       1 pv_controller.go:930] claim \"volume-953/pvc-hzgbd\" bound to volume \"local-7lvpd\"\nI1012 18:36:34.469286       1 pv_controller.go:879] volume \"local-7lvpd\" entered phase \"Bound\"\nI1012 18:36:34.469331       1 pv_controller.go:982] volume \"local-7lvpd\" bound to claim \"volume-953/pvc-hzgbd\"\nI1012 18:36:34.478805       1 utils.go:366] couldn't find ipfamilies for headless service: services-2178/clusterip-service likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly\nI1012 18:36:34.483790       1 pv_controller.go:823] claim \"volume-953/pvc-hzgbd\" entered phase \"Bound\"\nI1012 18:36:34.484107       1 pv_controller.go:930] claim \"provisioning-8046/pvc-mlmzp\" bound to volume \"local-gj2xk\"\nI1012 18:36:34.497253       1 pv_controller.go:879] volume \"local-gj2xk\" entered phase \"Bound\"\nI1012 18:36:34.497298       1 pv_controller.go:982] volume \"local-gj2xk\" bound to claim \"provisioning-8046/pvc-mlmzp\"\nI1012 18:36:34.505748       1 pv_controller.go:823] claim \"provisioning-8046/pvc-mlmzp\" entered phase \"Bound\"\nI1012 18:36:34.507458       1 pv_controller.go:930] claim \"provisioning-9572/pvc-7rp58\" bound to volume \"local-spjnr\"\nI1012 18:36:34.516370       1 pv_controller.go:879] volume \"local-spjnr\" entered phase \"Bound\"\nI1012 18:36:34.516412       1 pv_controller.go:982] volume \"local-spjnr\" bound to claim \"provisioning-9572/pvc-7rp58\"\nI1012 18:36:34.527345       1 pv_controller.go:823] claim \"provisioning-9572/pvc-7rp58\" entered phase \"Bound\"\nI1012 18:36:34.527534       1 pv_controller.go:930] claim \"provisioning-5374/pvc-pzkkp\" bound to volume \"local-np7b5\"\nI1012 18:36:34.542092       1 pv_controller.go:879] volume \"local-np7b5\" entered phase \"Bound\"\nI1012 18:36:34.542122       1 pv_controller.go:982] volume \"local-np7b5\" bound to claim \"provisioning-5374/pvc-pzkkp\"\nI1012 18:36:34.548963       1 pv_controller.go:823] claim \"provisioning-5374/pvc-pzkkp\" entered phase \"Bound\"\nI1012 18:36:34.549129       1 pv_controller.go:930] claim \"volume-4721/pvc-q7pfw\" bound to volume \"local-x2xbr\"\nI1012 18:36:34.559213       1 pv_controller.go:879] volume \"local-x2xbr\" entered phase \"Bound\"\nI1012 18:36:34.559246       1 pv_controller.go:982] volume \"local-x2xbr\" bound to claim \"volume-4721/pvc-q7pfw\"\nI1012 18:36:34.567705       1 pv_controller.go:823] claim \"volume-4721/pvc-q7pfw\" entered phase \"Bound\"\nE1012 18:36:34.634107       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-2621/default: secrets \"default-token-48skc\" is forbidden: unable to create new content in namespace 
provisioning-2621 because it is being terminated\nI1012 18:36:34.976983       1 namespace_controller.go:185] Namespace has been deleted apf-2135\nI1012 18:36:35.376074       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume \"pvc-2d02b865-4e44-42b7-b9eb-9f591b402a3e\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-018fcedeba91e1c8b\") from node \"ip-172-20-56-153.us-west-1.compute.internal\" \nI1012 18:36:35.376329       1 event.go:291] \"Event occurred\" object=\"volume-6173/aws-injector\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-2d02b865-4e44-42b7-b9eb-9f591b402a3e\\\" \"\nI1012 18:36:35.519011       1 namespace_controller.go:185] Namespace has been deleted volume-3787\nI1012 18:36:35.601330       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"deployment-3012/test-rolling-update-controller\" need=1 creating=1\nI1012 18:36:35.606658       1 event.go:291] \"Event occurred\" object=\"deployment-3012/test-rolling-update-controller\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: test-rolling-update-controller-fxq7b\"\nW1012 18:36:35.761058       1 reconciler.go:222] attacherDetacher.DetachVolume started for volume \"pvc-cf5cd425-9c22-4b70-9983-4f476a339fef\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-provisioning-9775^73c8e37e-2b8a-11ec-a880-466a2b8c7551\") on node \"ip-172-20-59-223.us-west-1.compute.internal\" This volume is not safe to detach, but maxWaitForUnmountDuration 6m0s expired, force detaching\nI1012 18:36:36.312687       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume \"pvc-cf5cd425-9c22-4b70-9983-4f476a339fef\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-provisioning-9775^73c8e37e-2b8a-11ec-a880-466a2b8c7551\") on node \"ip-172-20-59-223.us-west-1.compute.internal\" \nI1012 18:36:36.363566       1 namespace_controller.go:185] Namespace has been deleted pv-protection-9941\nI1012 18:36:36.605630       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume \"pvc-b7e70ef8-3f50-4efc-b4c8-55b9a39dc826\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0ba7e76cc0ce50e83\") on node \"ip-172-20-37-53.us-west-1.compute.internal\" \nI1012 18:36:36.668811       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-b7e70ef8-3f50-4efc-b4c8-55b9a39dc826\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0ba7e76cc0ce50e83\") from node \"ip-172-20-56-153.us-west-1.compute.internal\" \nI1012 18:36:37.606422       1 namespace_controller.go:185] Namespace has been deleted resourcequota-847\nI1012 18:36:37.678401       1 namespace_controller.go:185] Namespace has been deleted projected-5199\nI1012 18:36:37.735611       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-6338-5467/csi-mockplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-mockplugin-0 in StatefulSet csi-mockplugin successful\"\nI1012 18:36:37.864708       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-6338-5467/csi-mockplugin-attacher\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-mockplugin-attacher-0 in StatefulSet csi-mockplugin-attacher successful\"\nE1012 18:36:37.891053       1 tokens_controller.go:262] error synchronizing serviceaccount services-674/default: secrets \"default-token-cmg7g\" is 
forbidden: unable to create new content in namespace services-674 because it is being terminated\nI1012 18:36:38.629725       1 garbagecollector.go:471] \"Processing object\" object=\"services-2178/externalsvc-d7whv\" objectUID=8f08bb46-87f3-4786-ae57-26df4fa49360 kind=\"Pod\" virtual=false\nI1012 18:36:38.630127       1 garbagecollector.go:471] \"Processing object\" object=\"services-2178/externalsvc-scrxw\" objectUID=ed8f4217-c1e6-4c8f-b658-a261488f6dc5 kind=\"Pod\" virtual=false\nI1012 18:36:38.641996       1 garbagecollector.go:580] \"Deleting object\" object=\"services-2178/externalsvc-d7whv\" objectUID=8f08bb46-87f3-4786-ae57-26df4fa49360 kind=\"Pod\" propagationPolicy=Background\nI1012 18:36:38.643991       1 garbagecollector.go:580] \"Deleting object\" object=\"services-2178/externalsvc-scrxw\" objectUID=ed8f4217-c1e6-4c8f-b658-a261488f6dc5 kind=\"Pod\" propagationPolicy=Background\nI1012 18:36:38.669821       1 endpoints_controller.go:374] \"Error syncing endpoints, retrying\" service=\"services-2178/externalsvc\" err=\"Operation cannot be fulfilled on endpoints \\\"externalsvc\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI1012 18:36:38.670506       1 event.go:291] \"Event occurred\" object=\"services-2178/externalsvc\" kind=\"Endpoints\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedToUpdateEndpoint\" message=\"Failed to update endpoint services-2178/externalsvc: Operation cannot be fulfilled on endpoints \\\"externalsvc\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI1012 18:36:38.686409       1 namespace_controller.go:185] Namespace has been deleted statefulset-1322\nI1012 18:36:38.788281       1 graph_builder.go:587] add [v1/Pod, namespace: csi-mock-volumes-7892, name: inline-volume-f9dgh, uid: 8b38da99-8fd9-4d28-ba2d-0dff3ec8782b] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI1012 18:36:38.788372       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-7892/inline-volume-f9dgh\" objectUID=8b38da99-8fd9-4d28-ba2d-0dff3ec8782b kind=\"Pod\" virtual=false\nI1012 18:36:38.790485       1 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: csi-mock-volumes-7892, name: inline-volume-f9dgh, uid: 8b38da99-8fd9-4d28-ba2d-0dff3ec8782b]\nE1012 18:36:38.924104       1 tokens_controller.go:262] error synchronizing serviceaccount volume-expand-3060/default: secrets \"default-token-bll9x\" is forbidden: unable to create new content in namespace volume-expand-3060 because it is being terminated\nE1012 18:36:38.929464       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI1012 18:36:39.052769       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume \"pvc-b7e70ef8-3f50-4efc-b4c8-55b9a39dc826\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0ba7e76cc0ce50e83\") from node \"ip-172-20-56-153.us-west-1.compute.internal\" \nI1012 18:36:39.052933       1 event.go:291] \"Event occurred\" object=\"statefulset-3442/ss-0\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-b7e70ef8-3f50-4efc-b4c8-55b9a39dc826\\\" \"\nI1012 18:36:39.667700       1 namespace_controller.go:185] Namespace has been deleted provisioning-2621\nE1012 
18:36:39.806387       1 tokens_controller.go:262] error synchronizing serviceaccount crd-publish-openapi-3295/default: secrets \"default-token-vwwv4\" is forbidden: unable to create new content in namespace crd-publish-openapi-3295 because it is being terminated\nI1012 18:36:39.823717       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"deployment-3012/test-rolling-update-deployment-585b757574\" need=1 creating=1\nI1012 18:36:39.824698       1 event.go:291] \"Event occurred\" object=\"deployment-3012/test-rolling-update-deployment\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set test-rolling-update-deployment-585b757574 to 1\"\nI1012 18:36:39.830056       1 event.go:291] \"Event occurred\" object=\"deployment-3012/test-rolling-update-deployment-585b757574\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: test-rolling-update-deployment-585b757574-7l8m9\"\nI1012 18:36:39.840796       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-3012/test-rolling-update-deployment\" err=\"Operation cannot be fulfilled on deployments.apps \\\"test-rolling-update-deployment\\\": the object has been modified; please apply your changes to the latest version and try again\"\nE1012 18:36:40.362223       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-8874/default: secrets \"default-token-ffs6w\" is forbidden: unable to create new content in namespace provisioning-8874 because it is being terminated\nE1012 18:36:40.582747       1 tokens_controller.go:262] error synchronizing serviceaccount emptydir-6417/default: secrets \"default-token-fzrlj\" is forbidden: unable to create new content in namespace emptydir-6417 because it is being terminated\nI1012 18:36:40.923831       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"provisioning-8046/pvc-mlmzp\"\nI1012 18:36:40.930079       1 pv_controller.go:640] volume \"local-gj2xk\" is released and reclaim policy \"Retain\" will be executed\nI1012 18:36:40.933906       1 pv_controller.go:879] volume \"local-gj2xk\" entered phase \"Released\"\nI1012 18:36:40.981878       1 pv_controller_base.go:505] deletion of claim \"provisioning-8046/pvc-mlmzp\" was already processed\nI1012 18:36:41.832536       1 stateful_set.go:440] StatefulSet has been deleted volume-expand-3060-8025/csi-hostpathplugin\nI1012 18:36:41.832652       1 garbagecollector.go:471] \"Processing object\" object=\"volume-expand-3060-8025/csi-hostpathplugin-0\" objectUID=4b314257-4938-4794-9c47-acc0a03e9d51 kind=\"Pod\" virtual=false\nI1012 18:36:41.832736       1 garbagecollector.go:471] \"Processing object\" object=\"volume-expand-3060-8025/csi-hostpathplugin-869c9444c4\" objectUID=30520c58-f7e6-4d8d-ba25-61f7fab7e402 kind=\"ControllerRevision\" virtual=false\nI1012 18:36:41.835218       1 garbagecollector.go:580] \"Deleting object\" object=\"volume-expand-3060-8025/csi-hostpathplugin-869c9444c4\" objectUID=30520c58-f7e6-4d8d-ba25-61f7fab7e402 kind=\"ControllerRevision\" propagationPolicy=Background\nI1012 18:36:41.835506       1 garbagecollector.go:580] \"Deleting object\" object=\"volume-expand-3060-8025/csi-hostpathplugin-0\" objectUID=4b314257-4938-4794-9c47-acc0a03e9d51 kind=\"Pod\" propagationPolicy=Background\nI1012 18:36:41.898312       1 garbagecollector.go:471] \"Processing object\" object=\"services-7577/nodeport-update-service-8qmfg\" objectUID=b09bce93-e34b-4356-9a83-f30fd4f88c22 
kind=\"EndpointSlice\" virtual=false\nI1012 18:36:41.904101       1 garbagecollector.go:580] \"Deleting object\" object=\"services-7577/nodeport-update-service-8qmfg\" objectUID=b09bce93-e34b-4356-9a83-f30fd4f88c22 kind=\"EndpointSlice\" propagationPolicy=Background\nI1012 18:36:42.428689       1 namespace_controller.go:185] Namespace has been deleted apply-7370\nI1012 18:36:43.028039       1 namespace_controller.go:185] Namespace has been deleted services-674\nI1012 18:36:43.995399       1 namespace_controller.go:185] Namespace has been deleted volume-expand-3060\nW1012 18:36:44.078136       1 endpointslice_controller.go:306] Error syncing endpoint slices for service \"services-2178/externalsvc\", retrying. Error: EndpointSlice informer cache is out of date\nI1012 18:36:44.188333       1 garbagecollector.go:471] \"Processing object\" object=\"services-2178/externalsvc-w22xn\" objectUID=bf414df4-adc8-4582-ba83-cef0505bf938 kind=\"EndpointSlice\" virtual=false\nI1012 18:36:44.193565       1 garbagecollector.go:580] \"Deleting object\" object=\"services-2178/externalsvc-w22xn\" objectUID=bf414df4-adc8-4582-ba83-cef0505bf938 kind=\"EndpointSlice\" propagationPolicy=Background\nI1012 18:36:44.251076       1 garbagecollector.go:471] \"Processing object\" object=\"services-2178/clusterip-service-ccwgk\" objectUID=eb976643-2080-4026-8c4f-83a1eb0a0a2c kind=\"EndpointSlice\" virtual=false\nI1012 18:36:44.251282       1 garbagecollector.go:471] \"Processing object\" object=\"services-2178/clusterip-service-rllfw\" objectUID=6fb3ac48-5c0c-4916-807f-997ea96a041b kind=\"EndpointSlice\" virtual=false\nI1012 18:36:44.255288       1 garbagecollector.go:580] \"Deleting object\" object=\"services-2178/clusterip-service-rllfw\" objectUID=6fb3ac48-5c0c-4916-807f-997ea96a041b kind=\"EndpointSlice\" propagationPolicy=Background\nI1012 18:36:44.257074       1 garbagecollector.go:580] \"Deleting object\" object=\"services-2178/clusterip-service-ccwgk\" objectUID=eb976643-2080-4026-8c4f-83a1eb0a0a2c kind=\"EndpointSlice\" propagationPolicy=Background\nI1012 18:36:44.326676       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"volume-4721/pvc-q7pfw\"\nI1012 18:36:44.333155       1 pv_controller.go:640] volume \"local-x2xbr\" is released and reclaim policy \"Retain\" will be executed\nI1012 18:36:44.336691       1 pv_controller.go:879] volume \"local-x2xbr\" entered phase \"Released\"\nI1012 18:36:44.381155       1 pv_controller_base.go:505] deletion of claim \"volume-4721/pvc-q7pfw\" was already processed\nI1012 18:36:44.677233       1 replica_set.go:599] \"Too many replicas\" replicaSet=\"deployment-3012/test-rolling-update-controller\" need=0 deleting=1\nI1012 18:36:44.678090       1 replica_set.go:227] \"Found related ReplicaSets\" replicaSet=\"deployment-3012/test-rolling-update-controller\" relatedReplicaSets=[test-rolling-update-controller test-rolling-update-deployment-585b757574]\nI1012 18:36:44.678026       1 event.go:291] \"Event occurred\" object=\"deployment-3012/test-rolling-update-deployment\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set test-rolling-update-controller to 0\"\nI1012 18:36:44.678438       1 controller_utils.go:592] \"Deleting pod\" controller=\"test-rolling-update-controller\" pod=\"deployment-3012/test-rolling-update-controller-fxq7b\"\nI1012 18:36:44.689706       1 event.go:291] \"Event occurred\" object=\"deployment-3012/test-rolling-update-controller\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" 
type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: test-rolling-update-controller-fxq7b\"\nI1012 18:36:44.958822       1 namespace_controller.go:185] Namespace has been deleted crd-publish-openapi-3295\nI1012 18:36:45.131208       1 garbagecollector.go:471] \"Processing object\" object=\"provisioning-574-6125/csi-hostpathplugin-0\" objectUID=cb2180f3-bd39-4fc2-8c83-48420741ac8c kind=\"Pod\" virtual=false\nI1012 18:36:45.131534       1 stateful_set.go:440] StatefulSet has been deleted provisioning-574-6125/csi-hostpathplugin\nI1012 18:36:45.131578       1 garbagecollector.go:471] \"Processing object\" object=\"provisioning-574-6125/csi-hostpathplugin-6cf6795587\" objectUID=393db78d-e41e-45db-b13e-7299463ecaa1 kind=\"ControllerRevision\" virtual=false\nI1012 18:36:45.136937       1 garbagecollector.go:580] \"Deleting object\" object=\"provisioning-574-6125/csi-hostpathplugin-6cf6795587\" objectUID=393db78d-e41e-45db-b13e-7299463ecaa1 kind=\"ControllerRevision\" propagationPolicy=Background\nI1012 18:36:45.137395       1 garbagecollector.go:580] \"Deleting object\" object=\"provisioning-574-6125/csi-hostpathplugin-0\" objectUID=cb2180f3-bd39-4fc2-8c83-48420741ac8c kind=\"Pod\" propagationPolicy=Background\nI1012 18:36:45.401446       1 namespace_controller.go:185] Namespace has been deleted provisioning-8874\nI1012 18:36:45.544126       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"webhook-3463/sample-webhook-deployment-78988fc6cd\" need=1 creating=1\nI1012 18:36:45.544561       1 event.go:291] \"Event occurred\" object=\"webhook-3463/sample-webhook-deployment\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set sample-webhook-deployment-78988fc6cd to 1\"\nI1012 18:36:45.555145       1 event.go:291] \"Event occurred\" object=\"webhook-3463/sample-webhook-deployment-78988fc6cd\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: sample-webhook-deployment-78988fc6cd-5kjch\"\nI1012 18:36:45.560892       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"webhook-3463/sample-webhook-deployment\" err=\"Operation cannot be fulfilled on deployments.apps \\\"sample-webhook-deployment\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI1012 18:36:45.666508       1 namespace_controller.go:185] Namespace has been deleted emptydir-6417\nE1012 18:36:46.770654       1 pv_controller.go:1451] error finding provisioning plugin for claim volume-9308/pvc-kcgbc: storageclass.storage.k8s.io \"volume-9308\" not found\nI1012 18:36:46.771240       1 event.go:291] \"Event occurred\" object=\"volume-9308/pvc-kcgbc\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"volume-9308\\\" not found\"\nI1012 18:36:46.826052       1 pv_controller.go:879] volume \"aws-qnrx5\" entered phase \"Available\"\nE1012 18:36:46.988597       1 tokens_controller.go:262] error synchronizing serviceaccount volume-expand-3060-8025/default: secrets \"default-token-bnnhh\" is forbidden: unable to create new content in namespace volume-expand-3060-8025 because it is being terminated\nE1012 18:36:47.045914       1 tokens_controller.go:262] error synchronizing serviceaccount services-7577/default: secrets \"default-token-2qmjr\" is forbidden: unable to create new content in namespace services-7577 because it is being terminated\nI1012 
18:36:47.141919       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"services-7577/nodeport-update-service\" need=2 creating=1\nI1012 18:36:47.159460       1 garbagecollector.go:471] \"Processing object\" object=\"services-7577/nodeport-update-service-crvnh\" objectUID=36e13ea8-e77f-4815-8a33-ad099d1fd015 kind=\"Pod\" virtual=false\nI1012 18:36:47.159727       1 garbagecollector.go:471] \"Processing object\" object=\"services-7577/nodeport-update-service-xfhng\" objectUID=685f6e94-a170-4a72-a883-71449ea89932 kind=\"Pod\" virtual=false\nE1012 18:36:47.204171       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI1012 18:36:47.225739       1 namespace_controller.go:185] Namespace has been deleted provisioning-574\nI1012 18:36:47.226666       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"replicaset-1057/test-rs\" need=1 creating=1\nI1012 18:36:47.241501       1 event.go:291] \"Event occurred\" object=\"replicaset-1057/test-rs\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: test-rs-vkqw2\"\nE1012 18:36:47.276187       1 namespace_controller.go:162] deletion of namespace services-7577 failed: unexpected items still remain in namespace: services-7577 for gvr: /v1, Resource=pods\nE1012 18:36:47.405109       1 namespace_controller.go:162] deletion of namespace services-7577 failed: unexpected items still remain in namespace: services-7577 for gvr: /v1, Resource=pods\nI1012 18:36:47.410502       1 namespace_controller.go:185] Namespace has been deleted nettest-1044\nE1012 18:36:47.528844       1 namespace_controller.go:162] deletion of namespace services-7577 failed: unexpected items still remain in namespace: services-7577 for gvr: /v1, Resource=pods\nE1012 18:36:47.655987       1 namespace_controller.go:162] deletion of namespace services-7577 failed: unexpected items still remain in namespace: services-7577 for gvr: /v1, Resource=pods\nI1012 18:36:47.718799       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-6338/pvc-rjrw9\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-mock-csi-mock-volumes-6338\\\" or manually created by system administrator\"\nI1012 18:36:47.735982       1 pv_controller.go:879] volume \"pvc-976f9768-892b-4b6c-8e67-b0e1b97b2b69\" entered phase \"Bound\"\nI1012 18:36:47.736174       1 pv_controller.go:982] volume \"pvc-976f9768-892b-4b6c-8e67-b0e1b97b2b69\" bound to claim \"csi-mock-volumes-6338/pvc-rjrw9\"\nI1012 18:36:47.752590       1 pv_controller.go:823] claim \"csi-mock-volumes-6338/pvc-rjrw9\" entered phase \"Bound\"\nI1012 18:36:47.755777       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"provisioning-5374/pvc-pzkkp\"\nI1012 18:36:47.761920       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"csi-mock-volumes-5934/pvc-zs48x\"\nI1012 18:36:47.762148       1 pv_controller.go:640] volume \"local-np7b5\" is released and reclaim policy \"Retain\" will be executed\nI1012 18:36:47.767521       1 pv_controller.go:879] volume \"local-np7b5\" entered phase \"Released\"\nI1012 18:36:47.773127       1 pv_controller.go:640] volume \"pvc-913791eb-c763-4d4a-8fb3-15528a813a51\" is released and reclaim policy \"Delete\" will be executed\nI1012 18:36:47.778552       1 
pv_controller.go:879] volume \"pvc-913791eb-c763-4d4a-8fb3-15528a813a51\" entered phase \"Released\"\nI1012 18:36:47.789080       1 pv_controller_base.go:505] deletion of claim \"csi-mock-volumes-5934/pvc-zs48x\" was already processed\nI1012 18:36:47.814173       1 pv_controller_base.go:505] deletion of claim \"provisioning-5374/pvc-pzkkp\" was already processed\nE1012 18:36:47.865113       1 namespace_controller.go:162] deletion of namespace services-7577 failed: unexpected items still remain in namespace: services-7577 for gvr: /v1, Resource=pods\nI1012 18:36:47.984857       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-976f9768-892b-4b6c-8e67-b0e1b97b2b69\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-6338^4\") from node \"ip-172-20-37-53.us-west-1.compute.internal\" \nE1012 18:36:48.128899       1 namespace_controller.go:162] deletion of namespace services-7577 failed: unexpected items still remain in namespace: services-7577 for gvr: /v1, Resource=pods\nE1012 18:36:48.432873       1 namespace_controller.go:162] deletion of namespace services-7577 failed: unexpected items still remain in namespace: services-7577 for gvr: /v1, Resource=pods\nI1012 18:36:48.465762       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"provisioning-9572/pvc-7rp58\"\nI1012 18:36:48.472885       1 pv_controller.go:640] volume \"local-spjnr\" is released and reclaim policy \"Retain\" will be executed\nI1012 18:36:48.478482       1 pv_controller.go:879] volume \"local-spjnr\" entered phase \"Released\"\nI1012 18:36:48.524457       1 pv_controller_base.go:505] deletion of claim \"provisioning-9572/pvc-7rp58\" was already processed\nI1012 18:36:48.544670       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume \"pvc-976f9768-892b-4b6c-8e67-b0e1b97b2b69\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-6338^4\") from node \"ip-172-20-37-53.us-west-1.compute.internal\" \nI1012 18:36:48.545062       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-6338/pvc-volume-tester-w7sqj\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-976f9768-892b-4b6c-8e67-b0e1b97b2b69\\\" \"\nE1012 18:36:48.824976       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE1012 18:36:48.907269       1 namespace_controller.go:162] deletion of namespace services-7577 failed: unexpected items still remain in namespace: services-7577 for gvr: /v1, Resource=pods\nI1012 18:36:49.202284       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"ephemeral-3192/inline-volume-tester2-s7klr\" PVC=\"ephemeral-3192/inline-volume-tester2-s7klr-my-volume-0\"\nI1012 18:36:49.202308       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"ephemeral-3192/inline-volume-tester2-s7klr-my-volume-0\"\nE1012 18:36:49.397190       1 tokens_controller.go:262] error synchronizing serviceaccount services-2178/default: serviceaccounts \"default\" not found\nI1012 18:36:49.407200       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"ephemeral-3192/inline-volume-tester2-s7klr-my-volume-0\"\nI1012 18:36:49.419515       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-3192/inline-volume-tester2-s7klr\" objectUID=a942c22b-9ed5-4580-a125-9c274cc94035 
kind=\"Pod\" virtual=false\nI1012 18:36:49.425841       1 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: ephemeral-3192, name: inline-volume-tester2-s7klr, uid: a942c22b-9ed5-4580-a125-9c274cc94035]\nI1012 18:36:49.426165       1 pv_controller.go:640] volume \"pvc-ebec9b1f-8a87-4cd5-af8e-a2253883cc37\" is released and reclaim policy \"Delete\" will be executed\nI1012 18:36:49.430730       1 pv_controller.go:879] volume \"pvc-ebec9b1f-8a87-4cd5-af8e-a2253883cc37\" entered phase \"Released\"\nI1012 18:36:49.431514       1 pv_controller.go:930] claim \"volume-9308/pvc-kcgbc\" bound to volume \"aws-qnrx5\"\nI1012 18:36:49.439668       1 pv_controller.go:1340] isVolumeReleased[pvc-ebec9b1f-8a87-4cd5-af8e-a2253883cc37]: volume is released\nI1012 18:36:49.448202       1 pv_controller.go:879] volume \"aws-qnrx5\" entered phase \"Bound\"\nI1012 18:36:49.448230       1 pv_controller.go:982] volume \"aws-qnrx5\" bound to claim \"volume-9308/pvc-kcgbc\"\nI1012 18:36:49.467082       1 pv_controller.go:823] claim \"volume-9308/pvc-kcgbc\" entered phase \"Bound\"\nI1012 18:36:49.467987       1 pv_controller_base.go:505] deletion of claim \"ephemeral-3192/inline-volume-tester2-s7klr-my-volume-0\" was already processed\nE1012 18:36:49.670183       1 namespace_controller.go:162] deletion of namespace services-7577 failed: unexpected items still remain in namespace: services-7577 for gvr: /v1, Resource=pods\nE1012 18:36:50.421205       1 pv_controller.go:1451] error finding provisioning plugin for claim volume-6729/pvc-7xsn5: storageclass.storage.k8s.io \"volume-6729\" not found\nI1012 18:36:50.421531       1 event.go:291] \"Event occurred\" object=\"volume-6729/pvc-7xsn5\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"volume-6729\\\" not found\"\nI1012 18:36:50.477607       1 pv_controller.go:879] volume \"aws-z6jlp\" entered phase \"Available\"\nI1012 18:36:50.725614       1 graph_builder.go:587] add [v1/Pod, namespace: ephemeral-3192, name: inline-volume-tester-bgcck, uid: b74f82c8-98a0-467a-bcc5-baebe7366acc] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI1012 18:36:50.725690       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-3192/inline-volume-tester-bgcck-my-volume-0\" objectUID=e346ea1b-f300-4d0f-b9db-519c7c2700b2 kind=\"PersistentVolumeClaim\" virtual=false\nI1012 18:36:50.726190       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-3192/inline-volume-tester-bgcck\" objectUID=b74f82c8-98a0-467a-bcc5-baebe7366acc kind=\"Pod\" virtual=false\nI1012 18:36:50.730324       1 garbagecollector.go:595] adding [v1/PersistentVolumeClaim, namespace: ephemeral-3192, name: inline-volume-tester-bgcck-my-volume-0, uid: e346ea1b-f300-4d0f-b9db-519c7c2700b2] to attemptToDelete, because its owner [v1/Pod, namespace: ephemeral-3192, name: inline-volume-tester-bgcck, uid: b74f82c8-98a0-467a-bcc5-baebe7366acc] is deletingDependents\nI1012 18:36:50.731609       1 garbagecollector.go:580] \"Deleting object\" object=\"ephemeral-3192/inline-volume-tester-bgcck-my-volume-0\" objectUID=e346ea1b-f300-4d0f-b9db-519c7c2700b2 kind=\"PersistentVolumeClaim\" propagationPolicy=Background\nI1012 18:36:50.736552       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"ephemeral-3192/inline-volume-tester-bgcck\" PVC=\"ephemeral-3192/inline-volume-tester-bgcck-my-volume-0\"\nI1012 18:36:50.736576       1 
pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"ephemeral-3192/inline-volume-tester-bgcck-my-volume-0\"\nI1012 18:36:50.736675       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-3192/inline-volume-tester-bgcck-my-volume-0\" objectUID=e346ea1b-f300-4d0f-b9db-519c7c2700b2 kind=\"PersistentVolumeClaim\" virtual=false\nE1012 18:36:50.940390       1 tokens_controller.go:262] error synchronizing serviceaccount volume-4721/default: secrets \"default-token-jcf7j\" is forbidden: unable to create new content in namespace volume-4721 because it is being terminated\nI1012 18:36:51.046054       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-ebec9b1f-8a87-4cd5-af8e-a2253883cc37\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-ephemeral-3192^39ad1ade-2b8b-11ec-a3ea-2efa9c825458\") on node \"ip-172-20-47-26.us-west-1.compute.internal\" \nI1012 18:36:51.057084       1 operation_generator.go:1577] Verified volume is safe to detach for volume \"pvc-ebec9b1f-8a87-4cd5-af8e-a2253883cc37\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-ephemeral-3192^39ad1ade-2b8b-11ec-a3ea-2efa9c825458\") on node \"ip-172-20-47-26.us-west-1.compute.internal\" \nE1012 18:36:51.167630       1 namespace_controller.go:162] deletion of namespace services-7577 failed: unexpected items still remain in namespace: services-7577 for gvr: /v1, Resource=pods\nI1012 18:36:51.248441       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"aws-qnrx5\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-07a70def94c9e2513\") from node \"ip-172-20-59-223.us-west-1.compute.internal\" \nE1012 18:36:51.399642       1 tokens_controller.go:262] error synchronizing serviceaccount deployment-3012/default: secrets \"default-token-8r586\" is forbidden: unable to create new content in namespace deployment-3012 because it is being terminated\nI1012 18:36:51.456467       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-3012/test-rolling-update-deployment-585b757574-7l8m9\" objectUID=4dd146b1-25bf-476d-9d20-9474722a59a6 kind=\"Pod\" virtual=false\nI1012 18:36:51.461188       1 garbagecollector.go:580] \"Deleting object\" object=\"deployment-3012/test-rolling-update-deployment-585b757574-7l8m9\" objectUID=4dd146b1-25bf-476d-9d20-9474722a59a6 kind=\"Pod\" propagationPolicy=Background\nI1012 18:36:51.539689       1 deployment_controller.go:583] \"Deployment has been deleted\" deployment=\"deployment-3012/test-rolling-update-deployment\"\nI1012 18:36:51.644059       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume \"pvc-ebec9b1f-8a87-4cd5-af8e-a2253883cc37\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-ephemeral-3192^39ad1ade-2b8b-11ec-a3ea-2efa9c825458\") on node \"ip-172-20-47-26.us-west-1.compute.internal\" \nI1012 18:36:51.652378       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"volume-953/pvc-hzgbd\"\nI1012 18:36:51.659830       1 pv_controller.go:640] volume \"local-7lvpd\" is released and reclaim policy \"Retain\" will be executed\nI1012 18:36:51.663354       1 pv_controller.go:879] volume \"local-7lvpd\" entered phase \"Released\"\nI1012 18:36:51.718570       1 pv_controller_base.go:505] deletion of claim \"volume-953/pvc-hzgbd\" was already processed\nE1012 18:36:51.953309       1 tokens_controller.go:262] error synchronizing serviceaccount pod-network-test-7590/default: secrets \"default-token-bbkj7\" is forbidden: unable to create new content in namespace pod-network-test-7590 because it 
is being terminated\nI1012 18:36:52.275044       1 namespace_controller.go:185] Namespace has been deleted volume-expand-3060-8025\nE1012 18:36:52.469764       1 pv_controller.go:1451] error finding provisioning plugin for claim provisioning-2954/pvc-fvczh: storageclass.storage.k8s.io \"provisioning-2954\" not found\nI1012 18:36:52.470268       1 event.go:291] \"Event occurred\" object=\"provisioning-2954/pvc-fvczh\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"provisioning-2954\\\" not found\"\nI1012 18:36:52.524789       1 pv_controller.go:879] volume \"local-9p9gp\" entered phase \"Available\"\nI1012 18:36:52.981949       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume \"aws-qnrx5\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-07a70def94c9e2513\") from node \"ip-172-20-59-223.us-west-1.compute.internal\" \nI1012 18:36:52.982122       1 event.go:291] \"Event occurred\" object=\"volume-9308/aws-injector\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"aws-qnrx5\\\" \"\nE1012 18:36:53.115106       1 tokens_controller.go:262] error synchronizing serviceaccount csi-mock-volumes-5934/default: secrets \"default-token-dp7bt\" is forbidden: unable to create new content in namespace csi-mock-volumes-5934 because it is being terminated\nI1012 18:36:53.248196       1 namespace_controller.go:185] Namespace has been deleted provisioning-8046\nI1012 18:36:53.486669       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"replicaset-1057/test-rs\" need=2 creating=1\nI1012 18:36:53.489762       1 event.go:291] \"Event occurred\" object=\"replicaset-1057/test-rs\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: test-rs-wgr8b\"\nI1012 18:36:53.591645       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"replicaset-1057/test-rs\" need=4 creating=2\nI1012 18:36:53.596759       1 event.go:291] \"Event occurred\" object=\"replicaset-1057/test-rs\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: test-rs-wd6bw\"\nI1012 18:36:53.596960       1 namespace_controller.go:185] Namespace has been deleted request-timeout-6483\nI1012 18:36:53.612268       1 event.go:291] \"Event occurred\" object=\"replicaset-1057/test-rs\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: test-rs-cq6js\"\nI1012 18:36:53.644611       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"volume-expand-9029/csi-hostpathjsnk6\"\nI1012 18:36:53.651798       1 pv_controller.go:640] volume \"pvc-62cc9407-d61d-49af-baca-4f66937123b5\" is released and reclaim policy \"Delete\" will be executed\nI1012 18:36:53.655545       1 pv_controller.go:879] volume \"pvc-62cc9407-d61d-49af-baca-4f66937123b5\" entered phase \"Released\"\nI1012 18:36:53.657337       1 pv_controller.go:1340] isVolumeReleased[pvc-62cc9407-d61d-49af-baca-4f66937123b5]: volume is released\nI1012 18:36:53.673808       1 pv_controller_base.go:505] deletion of claim \"volume-expand-9029/csi-hostpathjsnk6\" was already processed\nE1012 18:36:53.934619       1 namespace_controller.go:162] deletion of namespace services-7577 failed: unexpected items still remain in namespace: services-7577 for gvr: /v1, Resource=pods\nE1012 18:36:54.001203       1 tokens_controller.go:262] 
error synchronizing serviceaccount provisioning-5374/default: secrets \"default-token-mvfn8\" is forbidden: unable to create new content in namespace provisioning-5374 because it is being terminated\nE1012 18:36:54.189928       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-9572/default: secrets \"default-token-pzdsl\" is forbidden: unable to create new content in namespace provisioning-9572 because it is being terminated\nI1012 18:36:54.566353       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-2000-9318/csi-mockplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-mockplugin-0 in StatefulSet csi-mockplugin successful\"\nI1012 18:36:54.620879       1 namespace_controller.go:185] Namespace has been deleted services-2178\nE1012 18:36:55.184683       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI1012 18:36:55.480092       1 namespace_controller.go:185] Namespace has been deleted provisioning-574-6125\nI1012 18:36:55.529507       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-5934-9147/csi-mockplugin-8f5fbd76\" objectUID=ee3b3ae5-69a9-4b43-a2a3-a5faa5c875c1 kind=\"ControllerRevision\" virtual=false\nI1012 18:36:55.529910       1 stateful_set.go:440] StatefulSet has been deleted csi-mock-volumes-5934-9147/csi-mockplugin\nI1012 18:36:55.530053       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-5934-9147/csi-mockplugin-0\" objectUID=988f28b5-d803-4900-be4d-370313a204d3 kind=\"Pod\" virtual=false\nI1012 18:36:55.532146       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-5934-9147/csi-mockplugin-8f5fbd76\" objectUID=ee3b3ae5-69a9-4b43-a2a3-a5faa5c875c1 kind=\"ControllerRevision\" propagationPolicy=Background\nI1012 18:36:55.533022       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-5934-9147/csi-mockplugin-0\" objectUID=988f28b5-d803-4900-be4d-370313a204d3 kind=\"Pod\" propagationPolicy=Background\nI1012 18:36:55.886687       1 namespace_controller.go:185] Namespace has been deleted pods-4841\nI1012 18:36:55.897862       1 namespace_controller.go:185] Namespace has been deleted svcaccounts-6418\nI1012 18:36:56.159813       1 namespace_controller.go:185] Namespace has been deleted volume-4721\nI1012 18:36:56.565971       1 namespace_controller.go:185] Namespace has been deleted deployment-3012\nI1012 18:36:56.748716       1 pv_controller.go:879] volume \"local-pv4swzt\" entered phase \"Available\"\nI1012 18:36:56.796368       1 pv_controller.go:930] claim \"persistent-local-volumes-test-6239/pvc-wnh8n\" bound to volume \"local-pv4swzt\"\nI1012 18:36:56.804361       1 pv_controller.go:879] volume \"local-pv4swzt\" entered phase \"Bound\"\nI1012 18:36:56.804430       1 pv_controller.go:982] volume \"local-pv4swzt\" bound to claim \"persistent-local-volumes-test-6239/pvc-wnh8n\"\nI1012 18:36:56.813775       1 pv_controller.go:823] claim \"persistent-local-volumes-test-6239/pvc-wnh8n\" entered phase \"Bound\"\nE1012 18:36:56.825895       1 tokens_controller.go:262] error synchronizing serviceaccount flexvolume-9861/default: secrets \"default-token-hbjwp\" is forbidden: unable to create new content in namespace flexvolume-9861 because it is being terminated\nI1012 18:36:57.047625       1 
pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"persistent-local-volumes-test-6239/pvc-wnh8n\"\nI1012 18:36:57.057892       1 pv_controller.go:640] volume \"local-pv4swzt\" is released and reclaim policy \"Retain\" will be executed\nI1012 18:36:57.061685       1 pv_controller.go:879] volume \"local-pv4swzt\" entered phase \"Released\"\nI1012 18:36:57.099258       1 pv_controller_base.go:505] deletion of claim \"persistent-local-volumes-test-6239/pvc-wnh8n\" was already processed\nI1012 18:36:58.196237       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-5934\nI1012 18:36:58.848275       1 garbagecollector.go:471] \"Processing object\" object=\"replicaset-1057/test-rs-vkqw2\" objectUID=b25b1d09-708b-4ab5-85b0-b610561503c0 kind=\"Pod\" virtual=false\nI1012 18:36:58.848718       1 garbagecollector.go:471] \"Processing object\" object=\"replicaset-1057/test-rs-wgr8b\" objectUID=aac6e575-d81f-4990-aa34-17bb0af19829 kind=\"Pod\" virtual=false\nI1012 18:36:58.849133       1 garbagecollector.go:471] \"Processing object\" object=\"replicaset-1057/test-rs-cq6js\" objectUID=360b3bd0-7058-4b10-82e6-44166fa9afa2 kind=\"Pod\" virtual=false\nI1012 18:36:58.848718       1 garbagecollector.go:471] \"Processing object\" object=\"replicaset-1057/test-rs-wd6bw\" objectUID=8f9ca253-06f8-4d31-b36f-c54948f335af kind=\"Pod\" virtual=false\nI1012 18:36:58.853776       1 garbagecollector.go:580] \"Deleting object\" object=\"replicaset-1057/test-rs-vkqw2\" objectUID=b25b1d09-708b-4ab5-85b0-b610561503c0 kind=\"Pod\" propagationPolicy=Background\nI1012 18:36:58.863800       1 garbagecollector.go:580] \"Deleting object\" object=\"replicaset-1057/test-rs-wd6bw\" objectUID=8f9ca253-06f8-4d31-b36f-c54948f335af kind=\"Pod\" propagationPolicy=Background\nI1012 18:36:58.863857       1 garbagecollector.go:580] \"Deleting object\" object=\"replicaset-1057/test-rs-cq6js\" objectUID=360b3bd0-7058-4b10-82e6-44166fa9afa2 kind=\"Pod\" propagationPolicy=Background\nI1012 18:36:58.863895       1 garbagecollector.go:580] \"Deleting object\" object=\"replicaset-1057/test-rs-wgr8b\" objectUID=aac6e575-d81f-4990-aa34-17bb0af19829 kind=\"Pod\" propagationPolicy=Background\nE1012 18:36:58.866528       1 tokens_controller.go:262] error synchronizing serviceaccount kubectl-9263/default: secrets \"default-token-wdfct\" is forbidden: unable to create new content in namespace kubectl-9263 because it is being terminated\nI1012 18:36:59.064915       1 namespace_controller.go:185] Namespace has been deleted provisioning-5374\nI1012 18:36:59.298001       1 namespace_controller.go:185] Namespace has been deleted provisioning-9572\nI1012 18:36:59.678429       1 namespace_controller.go:185] Namespace has been deleted services-9954\nI1012 18:36:59.832326       1 pv_controller.go:879] volume \"nfs-lt4zg\" entered phase \"Available\"\nI1012 18:36:59.880996       1 pv_controller.go:930] claim \"pv-9216/pvc-75v8q\" bound to volume \"nfs-lt4zg\"\nI1012 18:36:59.887800       1 pv_controller.go:879] volume \"nfs-lt4zg\" entered phase \"Bound\"\nI1012 18:36:59.887848       1 pv_controller.go:982] volume \"nfs-lt4zg\" bound to claim \"pv-9216/pvc-75v8q\"\nI1012 18:36:59.895389       1 pv_controller.go:823] claim \"pv-9216/pvc-75v8q\" entered phase \"Bound\"\nI1012 18:36:59.958717       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-2000/pvc-6pc6k\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForFirstConsumer\" message=\"waiting for first consumer to be created before 
binding\"\nI1012 18:37:00.021197       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-2000/pvc-6pc6k\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-mock-csi-mock-volumes-2000\\\" or manually created by system administrator\"\nE1012 18:37:00.034895       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI1012 18:37:00.085598       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-2000/pvc-6pc6k\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForPodScheduled\" message=\"waiting for pod pvc-volume-tester-gfnfr to be scheduled\"\nE1012 18:37:00.727499       1 tokens_controller.go:262] error synchronizing serviceaccount csi-mock-volumes-5934-9147/default: secrets \"default-token-gzl6g\" is forbidden: unable to create new content in namespace csi-mock-volumes-5934-9147 because it is being terminated\nI1012 18:37:01.116182       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"volume-1627/pvc-f64jp\"\nI1012 18:37:01.122365       1 pv_controller.go:640] volume \"local-c2dth\" is released and reclaim policy \"Retain\" will be executed\nI1012 18:37:01.128563       1 pv_controller.go:879] volume \"local-c2dth\" entered phase \"Released\"\nI1012 18:37:01.170674       1 pv_controller_base.go:505] deletion of claim \"volume-1627/pvc-f64jp\" was already processed\nE1012 18:37:01.290980       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI1012 18:37:01.861431       1 garbagecollector.go:471] \"Processing object\" object=\"volume-expand-9029-2193/csi-hostpathplugin-0\" objectUID=f0292bb7-4d3b-48d3-877d-f7aee7527b9f kind=\"Pod\" virtual=false\nI1012 18:37:01.861776       1 stateful_set.go:440] StatefulSet has been deleted volume-expand-9029-2193/csi-hostpathplugin\nI1012 18:37:01.862406       1 garbagecollector.go:471] \"Processing object\" object=\"volume-expand-9029-2193/csi-hostpathplugin-677cdf457b\" objectUID=f5e15da9-6012-415f-95ee-3218cdc8ae45 kind=\"ControllerRevision\" virtual=false\nI1012 18:37:01.864286       1 garbagecollector.go:580] \"Deleting object\" object=\"volume-expand-9029-2193/csi-hostpathplugin-0\" objectUID=f0292bb7-4d3b-48d3-877d-f7aee7527b9f kind=\"Pod\" propagationPolicy=Background\nI1012 18:37:01.864690       1 garbagecollector.go:580] \"Deleting object\" object=\"volume-expand-9029-2193/csi-hostpathplugin-677cdf457b\" objectUID=f5e15da9-6012-415f-95ee-3218cdc8ae45 kind=\"ControllerRevision\" propagationPolicy=Background\nI1012 18:37:01.916448       1 namespace_controller.go:185] Namespace has been deleted flexvolume-9861\nI1012 18:37:02.453756       1 namespace_controller.go:185] Namespace has been deleted volume-953\nI1012 18:37:02.531028       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"kubectl-7417/agnhost-primary\" need=1 creating=1\nI1012 18:37:02.536850       1 event.go:291] \"Event occurred\" object=\"kubectl-7417/agnhost-primary\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: agnhost-primary-gj9kp\"\nI1012 18:37:02.599399       1 namespace_controller.go:185] 
Namespace has been deleted prestop-9283\nE1012 18:37:02.683514       1 tokens_controller.go:262] error synchronizing serviceaccount persistent-local-volumes-test-6239/default: secrets \"default-token-dzld2\" is forbidden: unable to create new content in namespace persistent-local-volumes-test-6239 because it is being terminated\nI1012 18:37:03.015152       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-2000/pvc-6pc6k\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-mock-csi-mock-volumes-2000\\\" or manually created by system administrator\"\nI1012 18:37:03.140977       1 pv_controller.go:879] volume \"pvc-d82a39b2-998d-45a0-a002-96d29d347453\" entered phase \"Bound\"\nI1012 18:37:03.141013       1 pv_controller.go:982] volume \"pvc-d82a39b2-998d-45a0-a002-96d29d347453\" bound to claim \"csi-mock-volumes-2000/pvc-6pc6k\"\nI1012 18:37:03.150648       1 pv_controller.go:823] claim \"csi-mock-volumes-2000/pvc-6pc6k\" entered phase \"Bound\"\nI1012 18:37:03.344284       1 namespace_controller.go:185] Namespace has been deleted emptydir-1044\nI1012 18:37:03.619914       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-976f9768-892b-4b6c-8e67-b0e1b97b2b69\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-6338^4\") on node \"ip-172-20-37-53.us-west-1.compute.internal\" \nI1012 18:37:03.623982       1 operation_generator.go:1577] Verified volume is safe to detach for volume \"pvc-976f9768-892b-4b6c-8e67-b0e1b97b2b69\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-6338^4\") on node \"ip-172-20-37-53.us-west-1.compute.internal\" \nI1012 18:37:04.106653       1 namespace_controller.go:185] Namespace has been deleted replicaset-1057\nI1012 18:37:04.114876       1 namespace_controller.go:185] Namespace has been deleted kubectl-9263\nI1012 18:37:04.138049       1 namespace_controller.go:185] Namespace has been deleted volume-expand-9029\nI1012 18:37:04.157175       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume \"pvc-976f9768-892b-4b6c-8e67-b0e1b97b2b69\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-6338^4\") on node \"ip-172-20-37-53.us-west-1.compute.internal\" \nI1012 18:37:04.208752       1 namespace_controller.go:185] Namespace has been deleted services-7577\nI1012 18:37:04.389538       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"csi-mock-volumes-6338/pvc-rjrw9\"\nI1012 18:37:04.394894       1 pv_controller.go:640] volume \"pvc-976f9768-892b-4b6c-8e67-b0e1b97b2b69\" is released and reclaim policy \"Delete\" will be executed\nI1012 18:37:04.398253       1 pv_controller.go:879] volume \"pvc-976f9768-892b-4b6c-8e67-b0e1b97b2b69\" entered phase \"Released\"\nI1012 18:37:04.399939       1 pv_controller.go:1340] isVolumeReleased[pvc-976f9768-892b-4b6c-8e67-b0e1b97b2b69]: volume is released\nI1012 18:37:04.421130       1 pv_controller_base.go:505] deletion of claim \"csi-mock-volumes-6338/pvc-rjrw9\" was already processed\nI1012 18:37:04.432443       1 pv_controller.go:930] claim \"provisioning-2954/pvc-fvczh\" bound to volume \"local-9p9gp\"\nI1012 18:37:04.440403       1 pv_controller.go:879] volume \"local-9p9gp\" entered phase \"Bound\"\nI1012 18:37:04.440565       1 pv_controller.go:982] volume \"local-9p9gp\" bound to claim \"provisioning-2954/pvc-fvczh\"\nI1012 18:37:04.452967       1 pv_controller.go:823] claim \"provisioning-2954/pvc-fvczh\" 
entered phase \"Bound\"\nI1012 18:37:04.453349       1 pv_controller.go:930] claim \"volume-6729/pvc-7xsn5\" bound to volume \"aws-z6jlp\"\nI1012 18:37:04.470716       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"pv-9216/pvc-75v8q\"\nI1012 18:37:04.471967       1 pv_controller.go:879] volume \"aws-z6jlp\" entered phase \"Bound\"\nI1012 18:37:04.471996       1 pv_controller.go:982] volume \"aws-z6jlp\" bound to claim \"volume-6729/pvc-7xsn5\"\nI1012 18:37:04.480333       1 pv_controller.go:823] claim \"volume-6729/pvc-7xsn5\" entered phase \"Bound\"\nI1012 18:37:04.482564       1 pv_controller.go:640] volume \"nfs-lt4zg\" is released and reclaim policy \"Recycle\" will be executed\nI1012 18:37:04.486218       1 pv_controller.go:879] volume \"nfs-lt4zg\" entered phase \"Released\"\nI1012 18:37:04.487757       1 pv_controller.go:1340] isVolumeReleased[nfs-lt4zg]: volume is released\nI1012 18:37:04.508022       1 event.go:291] \"Event occurred\" object=\"nfs-lt4zg\" kind=\"PersistentVolume\" apiVersion=\"v1\" type=\"Normal\" reason=\"RecyclerPod\" message=\"Recycler pod: Successfully assigned default/recycler-for-nfs-lt4zg to ip-172-20-37-53.us-west-1.compute.internal\"\nI1012 18:37:04.657108       1 garbagecollector.go:471] \"Processing object\" object=\"services-451/up-down-1-qgcz4\" objectUID=8441c791-ddff-47b5-bd31-b28bc008fd8c kind=\"Pod\" virtual=false\nI1012 18:37:04.657447       1 garbagecollector.go:471] \"Processing object\" object=\"services-451/up-down-1-wnpdx\" objectUID=8c4637bc-fd2b-40d8-880d-fc35045916b4 kind=\"Pod\" virtual=false\nI1012 18:37:04.657625       1 garbagecollector.go:471] \"Processing object\" object=\"services-451/up-down-1-l87cl\" objectUID=af241151-579a-41b3-96e4-2135cf729cc3 kind=\"Pod\" virtual=false\nI1012 18:37:04.660327       1 garbagecollector.go:580] \"Deleting object\" object=\"services-451/up-down-1-wnpdx\" objectUID=8c4637bc-fd2b-40d8-880d-fc35045916b4 kind=\"Pod\" propagationPolicy=Background\nI1012 18:37:04.660557       1 garbagecollector.go:580] \"Deleting object\" object=\"services-451/up-down-1-qgcz4\" objectUID=8441c791-ddff-47b5-bd31-b28bc008fd8c kind=\"Pod\" propagationPolicy=Background\nI1012 18:37:04.660751       1 garbagecollector.go:580] \"Deleting object\" object=\"services-451/up-down-1-l87cl\" objectUID=af241151-579a-41b3-96e4-2135cf729cc3 kind=\"Pod\" propagationPolicy=Background\nI1012 18:37:05.128672       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"aws-z6jlp\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0ddfb67e367921bfa\") from node \"ip-172-20-37-53.us-west-1.compute.internal\" \nI1012 18:37:05.154202       1 namespace_controller.go:185] Namespace has been deleted configmap-8211\nI1012 18:37:05.786879       1 event.go:291] \"Event occurred\" object=\"nfs-lt4zg\" kind=\"PersistentVolume\" apiVersion=\"v1\" type=\"Normal\" reason=\"RecyclerPod\" message=\"Recycler pod: Pulling image \\\"busybox:1.27\\\"\"\nI1012 18:37:05.831122       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-5934-9147\nE1012 18:37:06.651200       1 pv_controller.go:1451] error finding provisioning plugin for claim provisioning-1211/pvc-dl2rw: storageclass.storage.k8s.io \"provisioning-1211\" not found\nI1012 18:37:06.651412       1 event.go:291] \"Event occurred\" object=\"provisioning-1211/pvc-dl2rw\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"provisioning-1211\\\" not found\"\nI1012 
18:37:06.705109       1 pv_controller.go:879] volume \"local-rnpqr\" entered phase \"Available\"\nE1012 18:37:06.884464       1 pv_controller.go:1451] error finding provisioning plugin for claim provisioning-923/pvc-vrq26: storageclass.storage.k8s.io \"provisioning-923\" not found\nI1012 18:37:06.884660       1 event.go:291] \"Event occurred\" object=\"provisioning-923/pvc-vrq26\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"provisioning-923\\\" not found\"\nI1012 18:37:06.942633       1 pv_controller.go:879] volume \"local-p76jb\" entered phase \"Available\"\nI1012 18:37:07.428326       1 namespace_controller.go:185] Namespace has been deleted pod-network-test-7590\nI1012 18:37:07.447923       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume \"aws-z6jlp\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0ddfb67e367921bfa\") from node \"ip-172-20-37-53.us-west-1.compute.internal\" \nI1012 18:37:07.448084       1 event.go:291] \"Event occurred\" object=\"volume-6729/exec-volume-test-preprovisionedpv-mmdr\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"aws-z6jlp\\\" \"\nI1012 18:37:07.790274       1 namespace_controller.go:185] Namespace has been deleted persistent-local-volumes-test-6239\nI1012 18:37:07.817527       1 event.go:291] \"Event occurred\" object=\"nfs-lt4zg\" kind=\"PersistentVolume\" apiVersion=\"v1\" type=\"Normal\" reason=\"RecyclerPod\" message=\"Recycler pod: Successfully pulled image \\\"busybox:1.27\\\" in 2.031239544s\"\nI1012 18:37:07.896222       1 event.go:291] \"Event occurred\" object=\"nfs-lt4zg\" kind=\"PersistentVolume\" apiVersion=\"v1\" type=\"Normal\" reason=\"RecyclerPod\" message=\"Recycler pod: Created container pv-recycler\"\nI1012 18:37:08.018812       1 event.go:291] \"Event occurred\" object=\"nfs-lt4zg\" kind=\"PersistentVolume\" apiVersion=\"v1\" type=\"Normal\" reason=\"RecyclerPod\" message=\"Recycler pod: Started container pv-recycler\"\nI1012 18:37:08.078276       1 garbagecollector.go:471] \"Processing object\" object=\"webhook-3463/e2e-test-webhook-mrcq2\" objectUID=b299aac9-1f65-45f8-a199-93f39621a175 kind=\"EndpointSlice\" virtual=false\nI1012 18:37:08.096215       1 garbagecollector.go:580] \"Deleting object\" object=\"webhook-3463/e2e-test-webhook-mrcq2\" objectUID=b299aac9-1f65-45f8-a199-93f39621a175 kind=\"EndpointSlice\" propagationPolicy=Background\nI1012 18:37:08.141619       1 garbagecollector.go:471] \"Processing object\" object=\"webhook-3463/sample-webhook-deployment-78988fc6cd\" objectUID=53558a20-6a95-443e-8a16-b2cff3d654b7 kind=\"ReplicaSet\" virtual=false\nI1012 18:37:08.141889       1 deployment_controller.go:583] \"Deployment has been deleted\" deployment=\"webhook-3463/sample-webhook-deployment\"\nI1012 18:37:08.148441       1 garbagecollector.go:580] \"Deleting object\" object=\"webhook-3463/sample-webhook-deployment-78988fc6cd\" objectUID=53558a20-6a95-443e-8a16-b2cff3d654b7 kind=\"ReplicaSet\" propagationPolicy=Background\nI1012 18:37:08.158619       1 garbagecollector.go:471] \"Processing object\" object=\"webhook-3463/sample-webhook-deployment-78988fc6cd-5kjch\" objectUID=d602a692-cf41-4d63-8d06-51ad59620ee3 kind=\"Pod\" virtual=false\nI1012 18:37:08.160919       1 garbagecollector.go:580] \"Deleting object\" object=\"webhook-3463/sample-webhook-deployment-78988fc6cd-5kjch\" 
objectUID=d602a692-cf41-4d63-8d06-51ad59620ee3 kind=\"Pod\" propagationPolicy=Background\nW1012 18:37:08.367145       1 reconciler.go:335] Multi-Attach error for volume \"aws-qnrx5\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-07a70def94c9e2513\") from node \"ip-172-20-56-153.us-west-1.compute.internal\" Volume is already exclusively attached to node ip-172-20-59-223.us-west-1.compute.internal and can't be attached to another\nI1012 18:37:08.367277       1 event.go:291] \"Event occurred\" object=\"volume-9308/aws-client\" kind=\"Pod\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedAttachVolume\" message=\"Multi-Attach error for volume \\\"aws-qnrx5\\\" Volume is already exclusively attached to one node and can't be attached to another\"\nI1012 18:37:09.314048       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"aws-qnrx5\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-07a70def94c9e2513\") on node \"ip-172-20-59-223.us-west-1.compute.internal\" \nI1012 18:37:09.327653       1 operation_generator.go:1577] Verified volume is safe to detach for volume \"aws-qnrx5\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-07a70def94c9e2513\") on node \"ip-172-20-59-223.us-west-1.compute.internal\" \nE1012 18:37:09.816125       1 pv_controller.go:1451] error finding provisioning plugin for claim provisioning-7453/pvc-45ffs: storageclass.storage.k8s.io \"provisioning-7453\" not found\nI1012 18:37:09.816614       1 event.go:291] \"Event occurred\" object=\"provisioning-7453/pvc-45ffs\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"provisioning-7453\\\" not found\"\nI1012 18:37:09.835122       1 garbagecollector.go:471] \"Processing object\" object=\"services-451/up-down-1-hshs9\" objectUID=6b044e0b-3cb0-461d-af7a-536fd679f637 kind=\"EndpointSlice\" virtual=false\nI1012 18:37:09.851683       1 garbagecollector.go:580] \"Deleting object\" object=\"services-451/up-down-1-hshs9\" objectUID=6b044e0b-3cb0-461d-af7a-536fd679f637 kind=\"EndpointSlice\" propagationPolicy=Background\nI1012 18:37:09.878762       1 pv_controller.go:879] volume \"local-gqk4w\" entered phase \"Available\"\nI1012 18:37:09.894100       1 garbagecollector.go:471] \"Processing object\" object=\"services-4417/endpoint-test2-j7f9f\" objectUID=9923e236-1f21-4a48-8eff-91463d047e91 kind=\"EndpointSlice\" virtual=false\nI1012 18:37:09.902167       1 garbagecollector.go:580] \"Deleting object\" object=\"services-4417/endpoint-test2-j7f9f\" objectUID=9923e236-1f21-4a48-8eff-91463d047e91 kind=\"EndpointSlice\" propagationPolicy=Background\nE1012 18:37:09.992885       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI1012 18:37:10.161155       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"replicaset-5816/condition-test\" need=3 creating=3\nI1012 18:37:10.161765       1 event.go:291] \"Event occurred\" object=\"volume-provisioning-9109/pvc-f6nhr\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"ebs.csi.aws.com\\\" or manually created by system administrator\"\nI1012 18:37:10.173953       1 event.go:291] \"Event occurred\" object=\"replicaset-5816/condition-test\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" 
type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: condition-test-f6mtm\"\nI1012 18:37:10.184137       1 event.go:291] \"Event occurred\" object=\"replicaset-5816/condition-test\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Warning\" reason=\"FailedCreate\" message=\"Error creating: pods \\\"condition-test-ktl9j\\\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\"\nI1012 18:37:10.185768       1 replica_set.go:588] Slow-start failure. Skipping creation of 1 pods, decrementing expectations for ReplicaSet replicaset-5816/condition-test\nI1012 18:37:10.186178       1 event.go:291] \"Event occurred\" object=\"replicaset-5816/condition-test\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: condition-test-mgzt2\"\nE1012 18:37:10.189685       1 replica_set.go:536] sync \"replicaset-5816/condition-test\" failed with pods \"condition-test-ktl9j\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\nI1012 18:37:10.190642       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"replicaset-5816/condition-test\" need=3 creating=1\nI1012 18:37:10.192597       1 replica_set.go:588] Slow-start failure. Skipping creation of 1 pods, decrementing expectations for ReplicaSet replicaset-5816/condition-test\nI1012 18:37:10.193064       1 event.go:291] \"Event occurred\" object=\"replicaset-5816/condition-test\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Warning\" reason=\"FailedCreate\" message=\"Error creating: pods \\\"condition-test-9p5kf\\\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\"\nE1012 18:37:10.202861       1 replica_set.go:536] sync \"replicaset-5816/condition-test\" failed with pods \"condition-test-9p5kf\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\nI1012 18:37:10.203093       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"replicaset-5816/condition-test\" need=3 creating=1\nI1012 18:37:10.204514       1 replica_set.go:588] Slow-start failure. Skipping creation of 1 pods, decrementing expectations for ReplicaSet replicaset-5816/condition-test\nI1012 18:37:10.204625       1 event.go:291] \"Event occurred\" object=\"replicaset-5816/condition-test\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Warning\" reason=\"FailedCreate\" message=\"Error creating: pods \\\"condition-test-5m4wv\\\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\"\nE1012 18:37:10.210538       1 replica_set.go:536] sync \"replicaset-5816/condition-test\" failed with pods \"condition-test-5m4wv\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\nI1012 18:37:10.210778       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"replicaset-5816/condition-test\" need=3 creating=1\nI1012 18:37:10.212653       1 replica_set.go:588] Slow-start failure. 
Skipping creation of 1 pods, decrementing expectations for ReplicaSet replicaset-5816/condition-test\nE1012 18:37:10.212802       1 replica_set.go:536] sync \"replicaset-5816/condition-test\" failed with pods \"condition-test-vscpp\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\nI1012 18:37:10.212869       1 event.go:291] \"Event occurred\" object=\"replicaset-5816/condition-test\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Warning\" reason=\"FailedCreate\" message=\"Error creating: pods \\\"condition-test-vscpp\\\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\"\nI1012 18:37:10.213736       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"replicaset-5816/condition-test\" need=3 creating=1\nI1012 18:37:10.215711       1 replica_set.go:588] Slow-start failure. Skipping creation of 1 pods, decrementing expectations for ReplicaSet replicaset-5816/condition-test\nE1012 18:37:10.215858       1 replica_set.go:536] sync \"replicaset-5816/condition-test\" failed with pods \"condition-test-qkwvh\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\nI1012 18:37:10.216163       1 event.go:291] \"Event occurred\" object=\"replicaset-5816/condition-test\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Warning\" reason=\"FailedCreate\" message=\"Error creating: pods \\\"condition-test-qkwvh\\\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\"\nI1012 18:37:10.220716       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"replicaset-5816/condition-test\" need=3 creating=1\nI1012 18:37:10.222531       1 replica_set.go:588] Slow-start failure. Skipping creation of 1 pods, decrementing expectations for ReplicaSet replicaset-5816/condition-test\nE1012 18:37:10.222729       1 replica_set.go:536] sync \"replicaset-5816/condition-test\" failed with pods \"condition-test-rszdv\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\nI1012 18:37:10.222662       1 event.go:291] \"Event occurred\" object=\"replicaset-5816/condition-test\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Warning\" reason=\"FailedCreate\" message=\"Error creating: pods \\\"condition-test-rszdv\\\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\"\nI1012 18:37:10.296423       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"replicaset-5816/condition-test\" need=3 creating=1\nI1012 18:37:10.298534       1 replica_set.go:588] Slow-start failure. 
Skipping creation of 1 pods, decrementing expectations for ReplicaSet replicaset-5816/condition-test\nE1012 18:37:10.298637       1 replica_set.go:536] sync \"replicaset-5816/condition-test\" failed with pods \"condition-test-dppbb\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\nI1012 18:37:10.298727       1 event.go:291] \"Event occurred\" object=\"replicaset-5816/condition-test\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Warning\" reason=\"FailedCreate\" message=\"Error creating: pods \\\"condition-test-dppbb\\\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\"\nI1012 18:37:10.854886       1 recycler_client.go:89] deleting recycler pod default/recycler-for-nfs-lt4zg\nI1012 18:37:10.863598       1 pv_controller.go:1214] volume \"nfs-lt4zg\" recycled\nI1012 18:37:10.864053       1 event.go:291] \"Event occurred\" object=\"nfs-lt4zg\" kind=\"PersistentVolume\" apiVersion=\"v1\" type=\"Normal\" reason=\"VolumeRecycled\" message=\"Volume recycled\"\nI1012 18:37:10.877380       1 pv_controller.go:879] volume \"nfs-lt4zg\" entered phase \"Available\"\nE1012 18:37:11.378464       1 tokens_controller.go:262] error synchronizing serviceaccount security-context-test-1401/default: secrets \"default-token-pf4g4\" is forbidden: unable to create new content in namespace security-context-test-1401 because it is being terminated\nI1012 18:37:11.567486       1 namespace_controller.go:185] Namespace has been deleted events-3978\nI1012 18:37:12.007426       1 namespace_controller.go:185] Namespace has been deleted volume-1627\nI1012 18:37:12.153489       1 namespace_controller.go:185] Namespace has been deleted volume-expand-9029-2193\nI1012 18:37:12.341915       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-6338-5467/csi-mockplugin-5595d9d77b\" objectUID=f37b5ef0-19f2-4630-a92a-609da8de1f68 kind=\"ControllerRevision\" virtual=false\nI1012 18:37:12.342123       1 stateful_set.go:440] StatefulSet has been deleted csi-mock-volumes-6338-5467/csi-mockplugin\nI1012 18:37:12.342269       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-6338-5467/csi-mockplugin-0\" objectUID=55307c7d-c5c5-4a80-a2d0-72d750f374fc kind=\"Pod\" virtual=false\nI1012 18:37:12.347705       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-6338-5467/csi-mockplugin-5595d9d77b\" objectUID=f37b5ef0-19f2-4630-a92a-609da8de1f68 kind=\"ControllerRevision\" propagationPolicy=Background\nI1012 18:37:12.348048       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-6338-5467/csi-mockplugin-0\" objectUID=55307c7d-c5c5-4a80-a2d0-72d750f374fc kind=\"Pod\" propagationPolicy=Background\nI1012 18:37:12.450746       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-6338-5467/csi-mockplugin-attacher-56468ddd\" objectUID=4b8b73a9-c4da-4e92-8b9a-295aa63a6582 kind=\"ControllerRevision\" virtual=false\nI1012 18:37:12.451112       1 stateful_set.go:440] StatefulSet has been deleted csi-mock-volumes-6338-5467/csi-mockplugin-attacher\nI1012 18:37:12.451405       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-6338-5467/csi-mockplugin-attacher-0\" objectUID=2a6d7b1e-e6c5-4843-8513-9e7783025acd kind=\"Pod\" virtual=false\nI1012 18:37:12.457556       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-6338-5467/csi-mockplugin-attacher-56468ddd\" 
objectUID=4b8b73a9-c4da-4e92-8b9a-295aa63a6582 kind=\"ControllerRevision\" propagationPolicy=Background\nI1012 18:37:12.460464       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-6338-5467/csi-mockplugin-attacher-0\" objectUID=2a6d7b1e-e6c5-4843-8513-9e7783025acd kind=\"Pod\" propagationPolicy=Background\nI1012 18:37:12.831960       1 pv_controller.go:930] claim \"pv-9216/pvc-lvl4p\" bound to volume \"nfs-lt4zg\"\nI1012 18:37:12.840951       1 pv_controller.go:879] volume \"nfs-lt4zg\" entered phase \"Bound\"\nI1012 18:37:12.840983       1 pv_controller.go:982] volume \"nfs-lt4zg\" bound to claim \"pv-9216/pvc-lvl4p\"\nI1012 18:37:12.846826       1 pv_controller.go:823] claim \"pv-9216/pvc-lvl4p\" entered phase \"Bound\"\nE1012 18:37:13.423079       1 tokens_controller.go:262] error synchronizing serviceaccount webhook-3463-markers/default: secrets \"default-token-565sz\" is forbidden: unable to create new content in namespace webhook-3463-markers because it is being terminated\nI1012 18:37:14.330404       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"csi-mock-volumes-2000/pvc-6pc6k\"\nI1012 18:37:14.335951       1 pv_controller.go:640] volume \"pvc-d82a39b2-998d-45a0-a002-96d29d347453\" is released and reclaim policy \"Delete\" will be executed\nI1012 18:37:14.344932       1 pv_controller.go:879] volume \"pvc-d82a39b2-998d-45a0-a002-96d29d347453\" entered phase \"Released\"\nI1012 18:37:14.347368       1 pv_controller.go:1340] isVolumeReleased[pvc-d82a39b2-998d-45a0-a002-96d29d347453]: volume is released\nI1012 18:37:14.414542       1 pv_controller_base.go:505] deletion of claim \"csi-mock-volumes-2000/pvc-6pc6k\" was already processed\nI1012 18:37:14.771447       1 garbagecollector.go:471] \"Processing object\" object=\"kubectl-7417/agnhost-primary-gj9kp\" objectUID=05e8c569-12ca-4d94-b300-210e2418b934 kind=\"Pod\" virtual=false\nI1012 18:37:14.773955       1 garbagecollector.go:580] \"Deleting object\" object=\"kubectl-7417/agnhost-primary-gj9kp\" objectUID=05e8c569-12ca-4d94-b300-210e2418b934 kind=\"Pod\" propagationPolicy=Background\nE1012 18:37:14.847737       1 tokens_controller.go:262] error synchronizing serviceaccount nettest-9877/default: secrets \"default-token-7w7qq\" is forbidden: unable to create new content in namespace nettest-9877 because it is being terminated\nI1012 18:37:14.852173       1 garbagecollector.go:471] \"Processing object\" object=\"kubectl-7417/agnhost-primary-ztj5h\" objectUID=f87dfad3-89d5-45dc-8893-6aa6041fc6c7 kind=\"EndpointSlice\" virtual=false\nI1012 18:37:14.852613       1 stateful_set_control.go:521] StatefulSet statefulset-3442/ss terminating Pod ss-1 for scale down\nI1012 18:37:14.868023       1 garbagecollector.go:580] \"Deleting object\" object=\"kubectl-7417/agnhost-primary-ztj5h\" objectUID=f87dfad3-89d5-45dc-8893-6aa6041fc6c7 kind=\"EndpointSlice\" propagationPolicy=Background\nI1012 18:37:14.870569       1 event.go:291] \"Event occurred\" object=\"statefulset-3442/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"delete Pod ss-1 in StatefulSet ss successful\"\nI1012 18:37:14.933691       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-6338\nI1012 18:37:15.004381       1 deployment_controller.go:583] \"Deployment has been deleted\" deployment=\"deployment-7982/test-recreate-deployment\"\nI1012 18:37:15.230282       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"pv-9216/pvc-lvl4p\"\nI1012 18:37:15.236416       
1 pv_controller.go:640] volume \"nfs-lt4zg\" is released and reclaim policy \"Recycle\" will be executed\nI1012 18:37:15.243759       1 pv_controller.go:879] volume \"nfs-lt4zg\" entered phase \"Released\"\nI1012 18:37:15.245869       1 pv_controller.go:1340] isVolumeReleased[nfs-lt4zg]: volume is released\nI1012 18:37:15.260944       1 event.go:291] \"Event occurred\" object=\"nfs-lt4zg\" kind=\"PersistentVolume\" apiVersion=\"v1\" type=\"Normal\" reason=\"RecyclerPod\" message=\"Recycler pod: Successfully assigned default/recycler-for-nfs-lt4zg to ip-172-20-37-53.us-west-1.compute.internal\"\nI1012 18:37:15.260974       1 event.go:291] \"Event occurred\" object=\"nfs-lt4zg\" kind=\"PersistentVolume\" apiVersion=\"v1\" type=\"Normal\" reason=\"RecyclerPod\" message=\"Recycler pod: Pulling image \\\"busybox:1.27\\\"\"\nI1012 18:37:15.260989       1 event.go:291] \"Event occurred\" object=\"nfs-lt4zg\" kind=\"PersistentVolume\" apiVersion=\"v1\" type=\"Normal\" reason=\"RecyclerPod\" message=\"Recycler pod: Successfully pulled image \\\"busybox:1.27\\\" in 2.031239544s\"\nI1012 18:37:15.261002       1 event.go:291] \"Event occurred\" object=\"nfs-lt4zg\" kind=\"PersistentVolume\" apiVersion=\"v1\" type=\"Normal\" reason=\"RecyclerPod\" message=\"Recycler pod: Created container pv-recycler\"\nI1012 18:37:15.261016       1 event.go:291] \"Event occurred\" object=\"nfs-lt4zg\" kind=\"PersistentVolume\" apiVersion=\"v1\" type=\"Normal\" reason=\"RecyclerPod\" message=\"Recycler pod: Started container pv-recycler\"\nI1012 18:37:15.261041       1 event.go:291] \"Event occurred\" object=\"nfs-lt4zg\" kind=\"PersistentVolume\" apiVersion=\"v1\" type=\"Warning\" reason=\"RecyclerPod\" message=\"Recycler pod: MountVolume.SetUp failed for volume \\\"kube-api-access-l5x77\\\" : object \\\"default\\\"/\\\"kube-root-ca.crt\\\" not registered\"\nI1012 18:37:15.264440       1 event.go:291] \"Event occurred\" object=\"nfs-lt4zg\" kind=\"PersistentVolume\" apiVersion=\"v1\" type=\"Normal\" reason=\"RecyclerPod\" message=\"Recycler pod: Successfully assigned default/recycler-for-nfs-lt4zg to ip-172-20-56-153.us-west-1.compute.internal\"\nE1012 18:37:15.534961       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI1012 18:37:15.549833       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"replicaset-5816/condition-test\" need=2 creating=1\nI1012 18:37:15.620776       1 resource_quota_controller.go:307] Resource quota has been deleted replicaset-5816/condition-test\nE1012 18:37:16.186627       1 tokens_controller.go:262] error synchronizing serviceaccount custom-resource-definition-7188/default: secrets \"default-token-tm6fq\" is forbidden: unable to create new content in namespace custom-resource-definition-7188 because it is being terminated\nI1012 18:37:16.344537       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume \"aws-qnrx5\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-07a70def94c9e2513\") on node \"ip-172-20-59-223.us-west-1.compute.internal\" \nI1012 18:37:16.362478       1 stateful_set_control.go:521] StatefulSet statefulset-3442/ss terminating Pod ss-0 for scale down\nI1012 18:37:16.368126       1 event.go:291] \"Event occurred\" object=\"statefulset-3442/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"delete Pod ss-0 in StatefulSet ss 
successful\"\nI1012 18:37:16.400237       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"aws-qnrx5\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-07a70def94c9e2513\") from node \"ip-172-20-56-153.us-west-1.compute.internal\" \nI1012 18:37:16.544831       1 namespace_controller.go:185] Namespace has been deleted security-context-test-1401\nI1012 18:37:16.769861       1 pv_controller.go:879] volume \"local-pvxcp5d\" entered phase \"Available\"\nI1012 18:37:16.815874       1 pv_controller.go:930] claim \"persistent-local-volumes-test-8467/pvc-p6gd4\" bound to volume \"local-pvxcp5d\"\nI1012 18:37:16.821993       1 event.go:291] \"Event occurred\" object=\"nfs-lt4zg\" kind=\"PersistentVolume\" apiVersion=\"v1\" type=\"Normal\" reason=\"RecyclerPod\" message=\"Recycler pod: Pulling image \\\"busybox:1.27\\\"\"\nI1012 18:37:16.825986       1 pv_controller.go:879] volume \"local-pvxcp5d\" entered phase \"Bound\"\nI1012 18:37:16.826088       1 pv_controller.go:982] volume \"local-pvxcp5d\" bound to claim \"persistent-local-volumes-test-8467/pvc-p6gd4\"\nI1012 18:37:16.832392       1 pv_controller.go:823] claim \"persistent-local-volumes-test-8467/pvc-p6gd4\" entered phase \"Bound\"\nI1012 18:37:16.970070       1 namespace_controller.go:185] Namespace has been deleted projected-1875\nE1012 18:37:17.552167       1 tokens_controller.go:262] error synchronizing serviceaccount csi-mock-volumes-6338-5467/default: secrets \"default-token-kbhxn\" is forbidden: unable to create new content in namespace csi-mock-volumes-6338-5467 because it is being terminated\nE1012 18:37:18.465753       1 tokens_controller.go:262] error synchronizing serviceaccount pod-network-test-2736/default: secrets \"default-token-bzn2w\" is forbidden: unable to create new content in namespace pod-network-test-2736 because it is being terminated\nI1012 18:37:18.576422       1 namespace_controller.go:185] Namespace has been deleted webhook-3463\nI1012 18:37:18.597351       1 namespace_controller.go:185] Namespace has been deleted webhook-3463-markers\nI1012 18:37:18.731194       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume \"aws-qnrx5\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-07a70def94c9e2513\") from node \"ip-172-20-56-153.us-west-1.compute.internal\" \nI1012 18:37:18.731382       1 event.go:291] \"Event occurred\" object=\"volume-9308/aws-client\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"aws-qnrx5\\\" \"\nI1012 18:37:18.824095       1 event.go:291] \"Event occurred\" object=\"nfs-lt4zg\" kind=\"PersistentVolume\" apiVersion=\"v1\" type=\"Normal\" reason=\"RecyclerPod\" message=\"Recycler pod: Successfully pulled image \\\"busybox:1.27\\\" in 2.003342187s\"\nI1012 18:37:18.925302       1 event.go:291] \"Event occurred\" object=\"nfs-lt4zg\" kind=\"PersistentVolume\" apiVersion=\"v1\" type=\"Normal\" reason=\"RecyclerPod\" message=\"Recycler pod: Created container pv-recycler\"\nI1012 18:37:19.017537       1 event.go:291] \"Event occurred\" object=\"nfs-lt4zg\" kind=\"PersistentVolume\" apiVersion=\"v1\" type=\"Normal\" reason=\"RecyclerPod\" message=\"Recycler pod: Started container pv-recycler\"\nE1012 18:37:19.233486       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI1012 18:37:19.432715       1 
pv_controller.go:930] claim \"provisioning-1211/pvc-dl2rw\" bound to volume \"local-rnpqr\"\nI1012 18:37:19.440419       1 pv_controller.go:879] volume \"local-rnpqr\" entered phase \"Bound\"\nI1012 18:37:19.440454       1 pv_controller.go:982] volume \"local-rnpqr\" bound to claim \"provisioning-1211/pvc-dl2rw\"\nI1012 18:37:19.448533       1 pv_controller.go:823] claim \"provisioning-1211/pvc-dl2rw\" entered phase \"Bound\"\nI1012 18:37:19.449334       1 pv_controller.go:930] claim \"provisioning-923/pvc-vrq26\" bound to volume \"local-p76jb\"\nI1012 18:37:19.459018       1 pv_controller.go:879] volume \"local-p76jb\" entered phase \"Bound\"\nI1012 18:37:19.459050       1 pv_controller.go:982] volume \"local-p76jb\" bound to claim \"provisioning-923/pvc-vrq26\"\nI1012 18:37:19.472327       1 pv_controller.go:823] claim \"provisioning-923/pvc-vrq26\" entered phase \"Bound\"\nI1012 18:37:19.472513       1 pv_controller.go:930] claim \"provisioning-7453/pvc-45ffs\" bound to volume \"local-gqk4w\"\nI1012 18:37:19.479923       1 pv_controller.go:879] volume \"local-gqk4w\" entered phase \"Bound\"\nI1012 18:37:19.480049       1 pv_controller.go:982] volume \"local-gqk4w\" bound to claim \"provisioning-7453/pvc-45ffs\"\nI1012 18:37:19.484524       1 pv_controller.go:823] claim \"provisioning-7453/pvc-45ffs\" entered phase \"Bound\"\nI1012 18:37:19.485484       1 event.go:291] \"Event occurred\" object=\"volume-provisioning-9109/pvc-f6nhr\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"ebs.csi.aws.com\\\" or manually created by system administrator\"\nI1012 18:37:19.981847       1 namespace_controller.go:185] Namespace has been deleted nettest-9877\nI1012 18:37:20.270713       1 namespace_controller.go:185] Namespace has been deleted services-4417\nI1012 18:37:20.663278       1 namespace_controller.go:185] Namespace has been deleted replicaset-5816\nI1012 18:37:20.710399       1 resource_quota_controller.go:435] syncing resource quota controller with updated resources from discovery: added: [kubectl.example.com/v1, Resource=e2e-test-kubectl-7350-crds], removed: []\nI1012 18:37:20.711026       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for e2e-test-kubectl-7350-crds.kubectl.example.com\nI1012 18:37:20.711092       1 shared_informer.go:240] Waiting for caches to sync for resource quota\nI1012 18:37:20.811613       1 shared_informer.go:247] Caches are synced for resource quota \nI1012 18:37:20.811636       1 resource_quota_controller.go:454] synced quota controller\nI1012 18:37:20.876002       1 garbagecollector.go:213] syncing garbage collector with updated resources from discovery (attempt 1): added: [kubectl.example.com/v1, Resource=e2e-test-kubectl-7350-crds], removed: []\nI1012 18:37:20.898979       1 shared_informer.go:240] Waiting for caches to sync for garbage collector\nI1012 18:37:20.899038       1 shared_informer.go:247] Caches are synced for garbage collector \nI1012 18:37:20.899046       1 garbagecollector.go:254] synced garbage collector\nI1012 18:37:20.970375       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-710b2517-adfd-4509-9ecd-ad9dcc3e4bca\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0ba7080823ea5ea33\") on node \"ip-172-20-47-26.us-west-1.compute.internal\" \nI1012 18:37:20.974555       1 operation_generator.go:1577] Verified volume is safe to detach for volume 
\"pvc-710b2517-adfd-4509-9ecd-ad9dcc3e4bca\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0ba7080823ea5ea33\") on node \"ip-172-20-47-26.us-west-1.compute.internal\" \nI1012 18:37:21.286690       1 namespace_controller.go:185] Namespace has been deleted custom-resource-definition-7188\nI1012 18:37:21.374432       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"ephemeral-3192/inline-volume-tester-bgcck\" PVC=\"ephemeral-3192/inline-volume-tester-bgcck-my-volume-0\"\nI1012 18:37:21.374460       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"ephemeral-3192/inline-volume-tester-bgcck-my-volume-0\"\nI1012 18:37:21.383444       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"ephemeral-3192/inline-volume-tester-bgcck-my-volume-0\"\nI1012 18:37:21.388361       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-3192/inline-volume-tester-bgcck\" objectUID=b74f82c8-98a0-467a-bcc5-baebe7366acc kind=\"Pod\" virtual=false\nI1012 18:37:21.391053       1 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: ephemeral-3192, name: inline-volume-tester-bgcck, uid: b74f82c8-98a0-467a-bcc5-baebe7366acc]\nI1012 18:37:21.391355       1 pv_controller.go:640] volume \"pvc-e346ea1b-f300-4d0f-b9db-519c7c2700b2\" is released and reclaim policy \"Delete\" will be executed\nI1012 18:37:21.399927       1 pv_controller.go:879] volume \"pvc-e346ea1b-f300-4d0f-b9db-519c7c2700b2\" entered phase \"Released\"\nI1012 18:37:21.402518       1 pv_controller.go:1340] isVolumeReleased[pvc-e346ea1b-f300-4d0f-b9db-519c7c2700b2]: volume is released\nI1012 18:37:21.424861       1 pv_controller_base.go:505] deletion of claim \"ephemeral-3192/inline-volume-tester-bgcck-my-volume-0\" was already processed\nI1012 18:37:21.757502       1 namespace_controller.go:185] Namespace has been deleted pods-3966\nI1012 18:37:21.902704       1 event.go:291] \"Event occurred\" object=\"statefulset-661/datadir-ss-0\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForFirstConsumer\" message=\"waiting for first consumer to be created before binding\"\nI1012 18:37:21.903088       1 event.go:291] \"Event occurred\" object=\"statefulset-661/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Claim datadir-ss-0 Pod ss-0 in StatefulSet ss success\"\nI1012 18:37:21.912367       1 event.go:291] \"Event occurred\" object=\"statefulset-661/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss-0 in StatefulSet ss successful\"\nI1012 18:37:21.927987       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"volume-6729/pvc-7xsn5\"\nI1012 18:37:21.932541       1 event.go:291] \"Event occurred\" object=\"statefulset-661/datadir-ss-0\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"ebs.csi.aws.com\\\" or manually created by system administrator\"\nI1012 18:37:21.932706       1 event.go:291] \"Event occurred\" object=\"statefulset-661/datadir-ss-0\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"ebs.csi.aws.com\\\" or manually created by system administrator\"\nI1012 18:37:21.941798       1 pv_controller.go:640] volume \"aws-z6jlp\" is 
released and reclaim policy \"Retain\" will be executed\nI1012 18:37:21.947229       1 pv_controller.go:879] volume \"aws-z6jlp\" entered phase \"Released\"\nI1012 18:37:22.398320       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-b7e70ef8-3f50-4efc-b4c8-55b9a39dc826\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0ba7e76cc0ce50e83\") on node \"ip-172-20-56-153.us-west-1.compute.internal\" \nI1012 18:37:22.405114       1 operation_generator.go:1577] Verified volume is safe to detach for volume \"pvc-b7e70ef8-3f50-4efc-b4c8-55b9a39dc826\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0ba7e76cc0ce50e83\") on node \"ip-172-20-56-153.us-west-1.compute.internal\" \nI1012 18:37:22.569207       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-6542-2157/csi-mockplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-mockplugin-0 in StatefulSet csi-mockplugin successful\"\nI1012 18:37:22.684525       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-6542-2157/csi-mockplugin-attacher\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-mockplugin-attacher-0 in StatefulSet csi-mockplugin-attacher successful\"\nE1012 18:37:23.157276       1 tokens_controller.go:262] error synchronizing serviceaccount emptydir-1729/default: secrets \"default-token-576ww\" is forbidden: unable to create new content in namespace emptydir-1729 because it is being terminated\nI1012 18:37:23.483866       1 event.go:291] \"Event occurred\" object=\"volume-7016-2681/csi-hostpathplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful\"\nI1012 18:37:23.617558       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"aws-z6jlp\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0ddfb67e367921bfa\") on node \"ip-172-20-37-53.us-west-1.compute.internal\" \nI1012 18:37:23.627717       1 operation_generator.go:1577] Verified volume is safe to detach for volume \"aws-z6jlp\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0ddfb67e367921bfa\") on node \"ip-172-20-37-53.us-west-1.compute.internal\" \nI1012 18:37:23.633278       1 event.go:291] \"Event occurred\" object=\"volume-7016/csi-hostpathdztjv\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-volume-7016\\\" or manually created by system administrator\"\nI1012 18:37:23.736628       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-8467/pod-31f16994-596f-43bc-81ed-480ab80cbd20\" PVC=\"persistent-local-volumes-test-8467/pvc-p6gd4\"\nI1012 18:37:23.736657       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-8467/pvc-p6gd4\"\nI1012 18:37:25.033461       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-8467/pod-31f16994-596f-43bc-81ed-480ab80cbd20\" PVC=\"persistent-local-volumes-test-8467/pvc-p6gd4\"\nI1012 18:37:25.033730       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-8467/pvc-p6gd4\"\nI1012 18:37:25.056773       1 garbagecollector.go:471] \"Processing object\" 
object=\"statefulset-3442/ss-696cb77d7d\" objectUID=0ad7c5ba-6d07-4432-82df-5f2de755f7ff kind=\"ControllerRevision\" virtual=false\nI1012 18:37:25.058045       1 stateful_set.go:440] StatefulSet has been deleted statefulset-3442/ss\nI1012 18:37:25.059484       1 garbagecollector.go:580] \"Deleting object\" object=\"statefulset-3442/ss-696cb77d7d\" objectUID=0ad7c5ba-6d07-4432-82df-5f2de755f7ff kind=\"ControllerRevision\" propagationPolicy=Background\nI1012 18:37:25.127276       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-8467/pod-31f16994-596f-43bc-81ed-480ab80cbd20\" PVC=\"persistent-local-volumes-test-8467/pvc-p6gd4\"\nI1012 18:37:25.127543       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-8467/pvc-p6gd4\"\nI1012 18:37:25.130017       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-8467/pod-003e036e-7cea-4bff-bc12-20afe5aae078\" PVC=\"persistent-local-volumes-test-8467/pvc-p6gd4\"\nI1012 18:37:25.130225       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-8467/pvc-p6gd4\"\nI1012 18:37:25.171834       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"statefulset-3442/datadir-ss-0\"\nI1012 18:37:25.210861       1 pv_controller.go:640] volume \"pvc-b7e70ef8-3f50-4efc-b4c8-55b9a39dc826\" is released and reclaim policy \"Delete\" will be executed\nI1012 18:37:25.223219       1 pv_controller.go:879] volume \"pvc-b7e70ef8-3f50-4efc-b4c8-55b9a39dc826\" entered phase \"Released\"\nI1012 18:37:25.227470       1 pv_controller.go:1340] isVolumeReleased[pvc-b7e70ef8-3f50-4efc-b4c8-55b9a39dc826]: volume is released\nI1012 18:37:25.231241       1 pv_controller.go:1340] isVolumeReleased[pvc-b7e70ef8-3f50-4efc-b4c8-55b9a39dc826]: volume is released\nI1012 18:37:25.231703       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"statefulset-3442/datadir-ss-1\"\nI1012 18:37:25.239924       1 pv_controller.go:640] volume \"pvc-710b2517-adfd-4509-9ecd-ad9dcc3e4bca\" is released and reclaim policy \"Delete\" will be executed\nI1012 18:37:25.242639       1 pv_controller.go:879] volume \"pvc-710b2517-adfd-4509-9ecd-ad9dcc3e4bca\" entered phase \"Released\"\nI1012 18:37:25.244675       1 pv_controller.go:1340] isVolumeReleased[pvc-710b2517-adfd-4509-9ecd-ad9dcc3e4bca]: volume is released\nI1012 18:37:25.269144       1 recycler_client.go:89] deleting recycler pod default/recycler-for-nfs-lt4zg\nI1012 18:37:25.282249       1 pv_controller.go:1214] volume \"nfs-lt4zg\" recycled\nI1012 18:37:25.282598       1 event.go:291] \"Event occurred\" object=\"nfs-lt4zg\" kind=\"PersistentVolume\" apiVersion=\"v1\" type=\"Normal\" reason=\"VolumeRecycled\" message=\"Volume recycled\"\nI1012 18:37:25.292683       1 pv_controller.go:879] volume \"nfs-lt4zg\" entered phase \"Available\"\nI1012 18:37:25.304563       1 namespace_controller.go:185] Namespace has been deleted kubectl-7417\nI1012 18:37:25.330244       1 pv_controller.go:879] volume \"pvc-4d8d7450-baf6-478b-8b46-9f7efa59e4ef\" entered phase \"Bound\"\nI1012 18:37:25.330388       1 pv_controller.go:982] volume \"pvc-4d8d7450-baf6-478b-8b46-9f7efa59e4ef\" bound to claim \"statefulset-661/datadir-ss-0\"\nI1012 18:37:25.339333       1 pv_controller.go:823] claim \"statefulset-661/datadir-ss-0\" entered phase \"Bound\"\nI1012 18:37:25.530600       1 pvc_protection_controller.go:303] \"Pod uses PVC\" 
pod=\"persistent-local-volumes-test-8467/pod-003e036e-7cea-4bff-bc12-20afe5aae078\" PVC=\"persistent-local-volumes-test-8467/pvc-p6gd4\"\nI1012 18:37:25.531041       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-8467/pvc-p6gd4\"\nE1012 18:37:25.556834       1 tokens_controller.go:262] error synchronizing serviceaccount nettest-6463/default: secrets \"default-token-jd9d9\" is forbidden: unable to create new content in namespace nettest-6463 because it is being terminated\nI1012 18:37:25.732511       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-8467/pod-003e036e-7cea-4bff-bc12-20afe5aae078\" PVC=\"persistent-local-volumes-test-8467/pvc-p6gd4\"\nI1012 18:37:25.732540       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-8467/pvc-p6gd4\"\nI1012 18:37:25.737596       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"persistent-local-volumes-test-8467/pvc-p6gd4\"\nI1012 18:37:25.743152       1 pv_controller.go:640] volume \"local-pvxcp5d\" is released and reclaim policy \"Retain\" will be executed\nI1012 18:37:25.747005       1 pv_controller.go:879] volume \"local-pvxcp5d\" entered phase \"Released\"\nI1012 18:37:25.750972       1 pv_controller_base.go:505] deletion of claim \"persistent-local-volumes-test-8467/pvc-p6gd4\" was already processed\nE1012 18:37:25.809948       1 tokens_controller.go:262] error synchronizing serviceaccount container-probe-3222/default: secrets \"default-token-k2vbk\" is forbidden: unable to create new content in namespace container-probe-3222 because it is being terminated\nI1012 18:37:25.940471       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-4d8d7450-baf6-478b-8b46-9f7efa59e4ef\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0c62b5d5d13a2b506\") from node \"ip-172-20-37-53.us-west-1.compute.internal\" \nI1012 18:37:26.291357       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"kubectl-5353/agnhost-primary\" need=1 creating=1\nI1012 18:37:26.300605       1 event.go:291] \"Event occurred\" object=\"kubectl-5353/agnhost-primary\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: agnhost-primary-5f4jq\"\nE1012 18:37:26.850319       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE1012 18:37:26.988382       1 tokens_controller.go:262] error synchronizing serviceaccount services-2223/default: secrets \"default-token-qklfq\" is forbidden: unable to create new content in namespace services-2223 because it is being terminated\nI1012 18:37:27.802276       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-6338-5467\nI1012 18:37:27.897391       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume \"pvc-710b2517-adfd-4509-9ecd-ad9dcc3e4bca\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0ba7080823ea5ea33\") on node \"ip-172-20-47-26.us-west-1.compute.internal\" \nE1012 18:37:28.235139       1 tokens_controller.go:262] error synchronizing serviceaccount ephemeral-3192/default: secrets \"default-token-72kh2\" is forbidden: unable to create new content in namespace ephemeral-3192 because it is being terminated\nI1012 18:37:28.262549       1 namespace_controller.go:185] 
Namespace has been deleted emptydir-1729\nI1012 18:37:28.392704       1 pv_controller_base.go:505] deletion of claim \"statefulset-3442/datadir-ss-1\" was already processed\nE1012 18:37:29.186403       1 tokens_controller.go:262] error synchronizing serviceaccount pods-7428/default: secrets \"default-token-z5tqn\" is forbidden: unable to create new content in namespace pods-7428 because it is being terminated\nI1012 18:37:29.211137       1 pv_controller.go:879] volume \"pvc-108a12f0-7d09-44d8-a81b-0d300e9e753d\" entered phase \"Bound\"\nI1012 18:37:29.211174       1 pv_controller.go:982] volume \"pvc-108a12f0-7d09-44d8-a81b-0d300e9e753d\" bound to claim \"volume-7016/csi-hostpathdztjv\"\nI1012 18:37:29.221869       1 pv_controller.go:823] claim \"volume-7016/csi-hostpathdztjv\" entered phase \"Bound\"\nI1012 18:37:29.988128       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-108a12f0-7d09-44d8-a81b-0d300e9e753d\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-volume-7016^7435998f-2b8b-11ec-91f3-b61a09aaa00c\") from node \"ip-172-20-37-53.us-west-1.compute.internal\" \nI1012 18:37:30.280371       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume \"pvc-b7e70ef8-3f50-4efc-b4c8-55b9a39dc826\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0ba7e76cc0ce50e83\") on node \"ip-172-20-56-153.us-west-1.compute.internal\" \nI1012 18:37:30.306454       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"apply-1087/deployment-shared-map-item-removal-55649fd747\" need=3 creating=3\nI1012 18:37:30.307219       1 event.go:291] \"Event occurred\" object=\"apply-1087/deployment-shared-map-item-removal\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set deployment-shared-map-item-removal-55649fd747 to 3\"\nI1012 18:37:30.316210       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"apply-1087/deployment-shared-map-item-removal\" err=\"Operation cannot be fulfilled on deployments.apps \\\"deployment-shared-map-item-removal\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI1012 18:37:30.318209       1 event.go:291] \"Event occurred\" object=\"apply-1087/deployment-shared-map-item-removal-55649fd747\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: deployment-shared-map-item-removal-55649fd747-7g7rx\"\nI1012 18:37:30.330004       1 event.go:291] \"Event occurred\" object=\"apply-1087/deployment-shared-map-item-removal-55649fd747\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: deployment-shared-map-item-removal-55649fd747-grcb5\"\nI1012 18:37:30.338275       1 event.go:291] \"Event occurred\" object=\"apply-1087/deployment-shared-map-item-removal-55649fd747\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: deployment-shared-map-item-removal-55649fd747-fsfwf\"\nI1012 18:37:30.441087       1 namespace_controller.go:185] Namespace has been deleted crd-publish-openapi-1688\nI1012 18:37:30.508600       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-2000-9318/csi-mockplugin-85b896745b\" objectUID=09a8b97a-2680-4c65-9830-976d74229315 kind=\"ControllerRevision\" virtual=false\nI1012 18:37:30.509194       1 stateful_set.go:440] StatefulSet has been deleted csi-mock-volumes-2000-9318/csi-mockplugin\nI1012 
18:37:30.509374       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-2000-9318/csi-mockplugin-0\" objectUID=e33a9140-52be-4de2-a548-c403508fd553 kind=\"Pod\" virtual=false\nI1012 18:37:30.513853       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-2000-9318/csi-mockplugin-85b896745b\" objectUID=09a8b97a-2680-4c65-9830-976d74229315 kind=\"ControllerRevision\" propagationPolicy=Background\nI1012 18:37:30.514247       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-2000-9318/csi-mockplugin-0\" objectUID=e33a9140-52be-4de2-a548-c403508fd553 kind=\"Pod\" propagationPolicy=Background\nI1012 18:37:30.562397       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume \"pvc-108a12f0-7d09-44d8-a81b-0d300e9e753d\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-volume-7016^7435998f-2b8b-11ec-91f3-b61a09aaa00c\") from node \"ip-172-20-37-53.us-west-1.compute.internal\" \nI1012 18:37:30.562618       1 event.go:291] \"Event occurred\" object=\"volume-7016/hostpath-injector\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-108a12f0-7d09-44d8-a81b-0d300e9e753d\\\" \"\nE1012 18:37:30.583016       1 tokens_controller.go:262] error synchronizing serviceaccount clientset-4661/default: serviceaccounts \"default\" not found\nI1012 18:37:31.011073       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-e346ea1b-f300-4d0f-b9db-519c7c2700b2\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-ephemeral-3192^315c9144-2b8b-11ec-a3ea-2efa9c825458\") on node \"ip-172-20-47-26.us-west-1.compute.internal\" \nI1012 18:37:31.015864       1 operation_generator.go:1577] Verified volume is safe to detach for volume \"pvc-e346ea1b-f300-4d0f-b9db-519c7c2700b2\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-ephemeral-3192^315c9144-2b8b-11ec-a3ea-2efa9c825458\") on node \"ip-172-20-47-26.us-west-1.compute.internal\" \nI1012 18:37:31.140904       1 stateful_set.go:440] StatefulSet has been deleted ephemeral-3192-2886/csi-hostpathplugin\nI1012 18:37:31.140912       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-3192-2886/csi-hostpathplugin-b4d5d8584\" objectUID=1a027e06-930f-4c0f-85cf-be0f837f5048 kind=\"ControllerRevision\" virtual=false\nI1012 18:37:31.141202       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-3192-2886/csi-hostpathplugin-0\" objectUID=52e1b88c-fd02-45ed-89fc-08686f101a09 kind=\"Pod\" virtual=false\nI1012 18:37:31.144566       1 garbagecollector.go:580] \"Deleting object\" object=\"ephemeral-3192-2886/csi-hostpathplugin-0\" objectUID=52e1b88c-fd02-45ed-89fc-08686f101a09 kind=\"Pod\" propagationPolicy=Background\nI1012 18:37:31.144701       1 garbagecollector.go:580] \"Deleting object\" object=\"ephemeral-3192-2886/csi-hostpathplugin-b4d5d8584\" objectUID=1a027e06-930f-4c0f-85cf-be0f837f5048 kind=\"ControllerRevision\" propagationPolicy=Background\nI1012 18:37:31.149227       1 namespace_controller.go:185] Namespace has been deleted container-probe-3222\nE1012 18:37:31.486767       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource\nI1012 18:37:31.562266       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume \"pvc-e346ea1b-f300-4d0f-b9db-519c7c2700b2\" (UniqueName: 
\"kubernetes.io/csi/csi-hostpath-ephemeral-3192^315c9144-2b8b-11ec-a3ea-2efa9c825458\") on node \"ip-172-20-47-26.us-west-1.compute.internal\" \nE1012 18:37:32.068699       1 tokens_controller.go:262] error synchronizing serviceaccount svcaccounts-1374/default: secrets \"default-token-kk9n4\" is forbidden: unable to create new content in namespace svcaccounts-1374 because it is being terminated\nI1012 18:37:32.113759       1 namespace_controller.go:185] Namespace has been deleted services-2223\nE1012 18:37:32.418024       1 pv_protection_controller.go:118] PV pvc-b7e70ef8-3f50-4efc-b4c8-55b9a39dc826 failed with : Operation cannot be fulfilled on persistentvolumes \"pvc-b7e70ef8-3f50-4efc-b4c8-55b9a39dc826\": the object has been modified; please apply your changes to the latest version and try again\nI1012 18:37:32.421090       1 pv_controller_base.go:505] deletion of claim \"statefulset-3442/datadir-ss-0\" was already processed\nI1012 18:37:32.473058       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"provisioning-1211/pvc-dl2rw\"\nI1012 18:37:32.479325       1 pv_controller.go:640] volume \"local-rnpqr\" is released and reclaim policy \"Retain\" will be executed\nI1012 18:37:32.482914       1 pv_controller.go:879] volume \"local-rnpqr\" entered phase \"Released\"\nI1012 18:37:32.528774       1 event.go:291] \"Event occurred\" object=\"apply-1087/deployment-shared-map-item-removal\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set deployment-shared-map-item-removal-55649fd747 to 4\"\nI1012 18:37:32.529200       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"apply-1087/deployment-shared-map-item-removal-55649fd747\" need=4 creating=1\nI1012 18:37:32.530235       1 pv_controller_base.go:505] deletion of claim \"provisioning-1211/pvc-dl2rw\" was already processed\nI1012 18:37:32.538185       1 event.go:291] \"Event occurred\" object=\"apply-1087/deployment-shared-map-item-removal-55649fd747\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: deployment-shared-map-item-removal-55649fd747-gs8pf\"\nI1012 18:37:32.833927       1 garbagecollector.go:471] \"Processing object\" object=\"apply-1087/deployment-shared-map-item-removal-55649fd747\" objectUID=87fdafb3-aa57-4d4a-97ac-ede44909abde kind=\"ReplicaSet\" virtual=false\nI1012 18:37:32.834159       1 deployment_controller.go:583] \"Deployment has been deleted\" deployment=\"apply-1087/deployment-shared-map-item-removal\"\nI1012 18:37:32.836305       1 garbagecollector.go:580] \"Deleting object\" object=\"apply-1087/deployment-shared-map-item-removal-55649fd747\" objectUID=87fdafb3-aa57-4d4a-97ac-ede44909abde kind=\"ReplicaSet\" propagationPolicy=Background\nI1012 18:37:32.838516       1 garbagecollector.go:471] \"Processing object\" object=\"apply-1087/deployment-shared-map-item-removal-55649fd747-fsfwf\" objectUID=8aaa54a0-cfc9-48ad-8df9-904d548e0399 kind=\"Pod\" virtual=false\nI1012 18:37:32.838685       1 garbagecollector.go:471] \"Processing object\" object=\"apply-1087/deployment-shared-map-item-removal-55649fd747-gs8pf\" objectUID=1e056803-0005-4c14-a1c1-c7bff71005a1 kind=\"Pod\" virtual=false\nI1012 18:37:32.838874       1 garbagecollector.go:471] \"Processing object\" object=\"apply-1087/deployment-shared-map-item-removal-55649fd747-7g7rx\" objectUID=a3d02e2e-956e-4c17-950b-ec8597e275e6 kind=\"Pod\" virtual=false\nI1012 18:37:32.838929       1 garbagecollector.go:471] \"Processing 
object\" object=\"apply-1087/deployment-shared-map-item-removal-55649fd747-grcb5\" objectUID=388697b8-d3d3-474e-b97b-fcf1acdaa3a5 kind=\"Pod\" virtual=false\nI1012 18:37:32.841859       1 garbagecollector.go:580] \"Deleting object\" object=\"apply-1087/deployment-shared-map-item-removal-55649fd747-gs8pf\" objectUID=1e056803-0005-4c14-a1c1-c7bff71005a1 kind=\"Pod\" propagationPolicy=Background\nI1012 18:37:32.841974       1 garbagecollector.go:580] \"Deleting object\" object=\"apply-1087/deployment-shared-map-item-removal-55649fd747-7g7rx\" objectUID=a3d02e2e-956e-4c17-950b-ec8597e275e6 kind=\"Pod\" propagationPolicy=Background\nI1012 18:37:32.842176       1 garbagecollector.go:580] \"Deleting object\" object=\"apply-1087/deployment-shared-map-item-removal-55649fd747-fsfwf\" objectUID=8aaa54a0-cfc9-48ad-8df9-904d548e0399 kind=\"Pod\" propagationPolicy=Background\nI1012 18:37:32.842371       1 garbagecollector.go:580] \"Deleting object\" object=\"apply-1087/deployment-shared-map-item-removal-55649fd747-grcb5\" objectUID=388697b8-d3d3-474e-b97b-fcf1acdaa3a5 kind=\"Pod\" propagationPolicy=Background\nE1012 18:37:32.845924       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE1012 18:37:32.936890       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE1012 18:37:32.996981       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI1012 18:37:33.123036       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-2000\nI1012 18:37:33.297528       1 namespace_controller.go:185] Namespace has been deleted ephemeral-3192\nE1012 18:37:33.626246       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI1012 18:37:33.740915       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume \"pvc-4d8d7450-baf6-478b-8b46-9f7efa59e4ef\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0c62b5d5d13a2b506\") from node \"ip-172-20-37-53.us-west-1.compute.internal\" \nI1012 18:37:33.741095       1 event.go:291] \"Event occurred\" object=\"statefulset-661/ss-0\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-4d8d7450-baf6-478b-8b46-9f7efa59e4ef\\\" \"\nI1012 18:37:33.895764       1 namespace_controller.go:185] Namespace has been deleted pod-network-test-2736\nI1012 18:37:34.302899       1 namespace_controller.go:185] Namespace has been deleted pods-7428\nI1012 18:37:34.346117       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"provisioning-2954/pvc-fvczh\"\nI1012 18:37:34.352704       1 pv_controller.go:640] volume \"local-9p9gp\" is released and reclaim policy \"Retain\" will be executed\nI1012 18:37:34.356499       1 pv_controller.go:879] volume \"local-9p9gp\" entered phase \"Released\"\nI1012 18:37:34.403763       1 pv_controller_base.go:505] deletion of claim \"provisioning-2954/pvc-fvczh\" was already processed\nI1012 
18:37:34.431436       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"provisioning-923/pvc-vrq26\"\nI1012 18:37:34.433493       1 event.go:291] \"Event occurred\" object=\"volume-provisioning-9109/pvc-f6nhr\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"ebs.csi.aws.com\\\" or manually created by system administrator\"\nI1012 18:37:34.438833       1 pv_controller.go:640] volume \"local-p76jb\" is released and reclaim policy \"Retain\" will be executed\nI1012 18:37:34.442334       1 pv_controller.go:879] volume \"local-p76jb\" entered phase \"Released\"\nI1012 18:37:34.483931       1 pv_controller_base.go:505] deletion of claim \"provisioning-923/pvc-vrq26\" was already processed\nE1012 18:37:35.004867       1 tokens_controller.go:262] error synchronizing serviceaccount pv-9216/default: secrets \"default-token-f6nw8\" is forbidden: unable to create new content in namespace pv-9216 because it is being terminated\nI1012 18:37:35.538251       1 namespace_controller.go:185] Namespace has been deleted persistent-local-volumes-test-8467\nE1012 18:37:35.539165       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI1012 18:37:35.631467       1 namespace_controller.go:185] Namespace has been deleted clientset-4661\nW1012 18:37:35.863314       1 reconciler.go:222] attacherDetacher.DetachVolume started for volume \"pvc-a5179113-cf83-4305-95e3-1a9decb579c5\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-ephemeral-9824^720acf48-2b8a-11ec-870b-e23f9e021548\") on node \"ip-172-20-59-223.us-west-1.compute.internal\" This volume is not safe to detach, but maxWaitForUnmountDuration 6m0s expired, force detaching\nI1012 18:37:36.282194       1 pv_controller_base.go:505] deletion of claim \"volume-6729/pvc-7xsn5\" was already processed\nI1012 18:37:36.381365       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-2d02b865-4e44-42b7-b9eb-9f591b402a3e\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-018fcedeba91e1c8b\") on node \"ip-172-20-56-153.us-west-1.compute.internal\" \nI1012 18:37:36.387799       1 operation_generator.go:1577] Verified volume is safe to detach for volume \"pvc-2d02b865-4e44-42b7-b9eb-9f591b402a3e\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-018fcedeba91e1c8b\") on node \"ip-172-20-56-153.us-west-1.compute.internal\" \nI1012 18:37:36.396762       1 garbagecollector.go:471] \"Processing object\" object=\"endpointslicemirroring-8930/example-custom-endpoints-ghjxd\" objectUID=f9f1db82-1bf9-46da-839d-790722cca1f8 kind=\"EndpointSlice\" virtual=false\nI1012 18:37:36.453078       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume \"pvc-a5179113-cf83-4305-95e3-1a9decb579c5\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-ephemeral-9824^720acf48-2b8a-11ec-870b-e23f9e021548\") on node \"ip-172-20-59-223.us-west-1.compute.internal\" \nI1012 18:37:36.655115       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume \"aws-z6jlp\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0ddfb67e367921bfa\") on node \"ip-172-20-37-53.us-west-1.compute.internal\" \nI1012 18:37:36.915094       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-9298-3391/csi-mockplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" 
type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-mockplugin-0 in StatefulSet csi-mockplugin successful\"\nI1012 18:37:37.006791       1 event.go:291] \"Event occurred\" object=\"provisioning-9973/awsmlxsr\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForFirstConsumer\" message=\"waiting for first consumer to be created before binding\"\nI1012 18:37:37.118489       1 event.go:291] \"Event occurred\" object=\"provisioning-9973/awsmlxsr\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"ebs.csi.aws.com\\\" or manually created by system administrator\"\nI1012 18:37:37.160129       1 namespace_controller.go:185] Namespace has been deleted svcaccounts-1374\nI1012 18:37:37.580210       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-6542/pvc-jcfnd\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForFirstConsumer\" message=\"waiting for first consumer to be created before binding\"\nI1012 18:37:37.610774       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"volume-6173/awsvsccv\"\nI1012 18:37:37.615868       1 pv_controller.go:640] volume \"pvc-2d02b865-4e44-42b7-b9eb-9f591b402a3e\" is released and reclaim policy \"Delete\" will be executed\nI1012 18:37:37.619476       1 pv_controller.go:879] volume \"pvc-2d02b865-4e44-42b7-b9eb-9f591b402a3e\" entered phase \"Released\"\nI1012 18:37:37.621151       1 pv_controller.go:1340] isVolumeReleased[pvc-2d02b865-4e44-42b7-b9eb-9f591b402a3e]: volume is released\nI1012 18:37:38.006482       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-6542/pvc-jcfnd\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForFirstConsumer\" message=\"waiting for first consumer to be created before binding\"\nI1012 18:37:38.010404       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"csi-mock-volumes-6542/pvc-jcfnd\"\nE1012 18:37:38.596751       1 tokens_controller.go:262] error synchronizing serviceaccount configmap-7999/default: secrets \"default-token-z96jn\" is forbidden: unable to create new content in namespace configmap-7999 because it is being terminated\nE1012 18:37:39.224090       1 tokens_controller.go:262] error synchronizing serviceaccount container-probe-8817/default: secrets \"default-token-mlbbw\" is forbidden: unable to create new content in namespace container-probe-8817 because it is being terminated\nE1012 18:37:40.075537       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-2954/default: secrets \"default-token-fc5fc\" is forbidden: unable to create new content in namespace provisioning-2954 because it is being terminated\nI1012 18:37:40.121119       1 namespace_controller.go:185] Namespace has been deleted pv-9216\nE1012 18:37:40.275303       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-923/default: secrets \"default-token-22ql5\" is forbidden: unable to create new content in namespace provisioning-923 because it is being terminated\nE1012 18:37:40.281517       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI1012 18:37:40.482850       1 pv_controller.go:879] volume \"pvc-db33b6a5-8108-4203-a36b-c99e35256a39\" entered phase \"Bound\"\nI1012 
18:37:40.482899       1 pv_controller.go:982] volume \"pvc-db33b6a5-8108-4203-a36b-c99e35256a39\" bound to claim \"provisioning-9973/awsmlxsr\"\nI1012 18:37:40.494338       1 pv_controller.go:823] claim \"provisioning-9973/awsmlxsr\" entered phase \"Bound\"\nE1012 18:37:40.630155       1 tokens_controller.go:262] error synchronizing serviceaccount statefulset-3442/default: secrets \"default-token-kdvp2\" is forbidden: unable to create new content in namespace statefulset-3442 because it is being terminated\nI1012 18:37:40.805565       1 namespace_controller.go:185] Namespace has been deleted kubectl-4077\nI1012 18:37:40.934405       1 garbagecollector.go:471] \"Processing object\" object=\"kubectl-5353/agnhost-primary-5f4jq\" objectUID=d998ab09-bc5e-4f16-9614-de314e28e100 kind=\"Pod\" virtual=false\nI1012 18:37:40.938671       1 garbagecollector.go:580] \"Deleting object\" object=\"kubectl-5353/agnhost-primary-5f4jq\" objectUID=d998ab09-bc5e-4f16-9614-de314e28e100 kind=\"Pod\" propagationPolicy=Background\nE1012 18:37:41.091463       1 tokens_controller.go:262] error synchronizing serviceaccount kubectl-5353/default: secrets \"default-token-kp4tj\" is forbidden: unable to create new content in namespace kubectl-5353 because it is being terminated\nI1012 18:37:41.131377       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-db33b6a5-8108-4203-a36b-c99e35256a39\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-055d701a3bde092c7\") from node \"ip-172-20-56-153.us-west-1.compute.internal\" \nI1012 18:37:41.477640       1 namespace_controller.go:185] Namespace has been deleted ephemeral-3192-2886\nE1012 18:37:41.618267       1 tokens_controller.go:262] error synchronizing serviceaccount endpointslicemirroring-8930/default: secrets \"default-token-j4w7g\" is forbidden: unable to create new content in namespace endpointslicemirroring-8930 because it is being terminated\nE1012 18:37:42.979639       1 tokens_controller.go:262] error synchronizing serviceaccount secrets-3807/default: serviceaccounts \"default\" not found\nE1012 18:37:43.020129       1 tokens_controller.go:262] error synchronizing serviceaccount kubectl-4356/default: secrets \"default-token-cl8xv\" is forbidden: unable to create new content in namespace kubectl-4356 because it is being terminated\nI1012 18:37:43.184324       1 namespace_controller.go:185] Namespace has been deleted volume-6729\nI1012 18:37:43.197481       1 pv_controller.go:1340] isVolumeReleased[pvc-2d02b865-4e44-42b7-b9eb-9f591b402a3e]: volume is released\nE1012 18:37:43.259262       1 tokens_controller.go:262] error synchronizing serviceaccount csi-mock-volumes-6542/default: secrets \"default-token-7ngrr\" is forbidden: unable to create new content in namespace csi-mock-volumes-6542 because it is being terminated\nI1012 18:37:43.351576       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume \"pvc-2d02b865-4e44-42b7-b9eb-9f591b402a3e\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-018fcedeba91e1c8b\") on node \"ip-172-20-56-153.us-west-1.compute.internal\" \nI1012 18:37:43.456915       1 event.go:291] \"Event occurred\" object=\"volume-9264-670/csi-hostpathplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful\"\nI1012 18:37:43.543722       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume \"pvc-db33b6a5-8108-4203-a36b-c99e35256a39\" (UniqueName: 
\"kubernetes.io/csi/ebs.csi.aws.com^vol-055d701a3bde092c7\") from node \"ip-172-20-56-153.us-west-1.compute.internal\" \nI1012 18:37:43.544130       1 event.go:291] \"Event occurred\" object=\"provisioning-9973/pod-subpath-test-dynamicpv-46h7\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-db33b6a5-8108-4203-a36b-c99e35256a39\\\" \"\nI1012 18:37:43.582592       1 namespace_controller.go:185] Namespace has been deleted provisioning-1211\nI1012 18:37:43.606811       1 event.go:291] \"Event occurred\" object=\"volume-9264/csi-hostpath8gtqq\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-volume-9264\\\" or manually created by system administrator\"\nI1012 18:37:43.656411       1 namespace_controller.go:185] Namespace has been deleted configmap-7999\nI1012 18:37:43.933136       1 namespace_controller.go:185] Namespace has been deleted events-8115\nI1012 18:37:44.067703       1 event.go:291] \"Event occurred\" object=\"statefulset-661/datadir-ss-1\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForFirstConsumer\" message=\"waiting for first consumer to be created before binding\"\nI1012 18:37:44.067998       1 event.go:291] \"Event occurred\" object=\"statefulset-661/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Claim datadir-ss-1 Pod ss-1 in StatefulSet ss success\"\nI1012 18:37:44.083706       1 event.go:291] \"Event occurred\" object=\"statefulset-661/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss-1 in StatefulSet ss successful\"\nI1012 18:37:44.102774       1 event.go:291] \"Event occurred\" object=\"statefulset-661/datadir-ss-1\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"ebs.csi.aws.com\\\" or manually created by system administrator\"\nI1012 18:37:44.103446       1 event.go:291] \"Event occurred\" object=\"statefulset-661/datadir-ss-1\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"ebs.csi.aws.com\\\" or manually created by system administrator\"\nI1012 18:37:44.272552       1 namespace_controller.go:185] Namespace has been deleted container-probe-8817\nI1012 18:37:44.767513       1 namespace_controller.go:185] Namespace has been deleted container-probe-5402\nI1012 18:37:44.781923       1 pv_controller_base.go:505] deletion of claim \"volume-6173/awsvsccv\" was already processed\nE1012 18:37:44.875417       1 pv_controller.go:1451] error finding provisioning plugin for claim provisioning-7134/pvc-plkdg: storageclass.storage.k8s.io \"provisioning-7134\" not found\nI1012 18:37:44.875744       1 event.go:291] \"Event occurred\" object=\"provisioning-7134/pvc-plkdg\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"provisioning-7134\\\" not found\"\nI1012 18:37:44.929748       1 pv_controller.go:879] volume \"local-dn2gq\" entered phase \"Available\"\nI1012 18:37:44.998405       1 pv_controller.go:879] volume \"local-pvbrlx7\" entered phase 
\"Available\"\nI1012 18:37:45.047886       1 pv_controller.go:930] claim \"persistent-local-volumes-test-8319/pvc-hss9b\" bound to volume \"local-pvbrlx7\"\nI1012 18:37:45.061342       1 pv_controller.go:879] volume \"local-pvbrlx7\" entered phase \"Bound\"\nI1012 18:37:45.061449       1 pv_controller.go:982] volume \"local-pvbrlx7\" bound to claim \"persistent-local-volumes-test-8319/pvc-hss9b\"\nI1012 18:37:45.068400       1 pv_controller.go:823] claim \"persistent-local-volumes-test-8319/pvc-hss9b\" entered phase \"Bound\"\nI1012 18:37:45.203190       1 namespace_controller.go:185] Namespace has been deleted provisioning-2954\nI1012 18:37:45.318758       1 namespace_controller.go:185] Namespace has been deleted provisioning-923\nI1012 18:37:45.493033       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"provisioning-7453/pvc-45ffs\"\nI1012 18:37:45.501777       1 pv_controller.go:640] volume \"local-gqk4w\" is released and reclaim policy \"Retain\" will be executed\nI1012 18:37:45.505314       1 pv_controller.go:879] volume \"local-gqk4w\" entered phase \"Released\"\nI1012 18:37:45.548041       1 pv_controller_base.go:505] deletion of claim \"provisioning-7453/pvc-45ffs\" was already processed\nI1012 18:37:45.670748       1 stateful_set.go:440] StatefulSet has been deleted csi-mock-volumes-6542-2157/csi-mockplugin\nI1012 18:37:45.670694       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-6542-2157/csi-mockplugin-7756985bc6\" objectUID=ed3d6552-46e7-48ba-8f43-a9a32d054dd6 kind=\"ControllerRevision\" virtual=false\nI1012 18:37:45.671513       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-6542-2157/csi-mockplugin-0\" objectUID=32f089a2-f97b-4cb8-aac8-a3965b73d9d0 kind=\"Pod\" virtual=false\nI1012 18:37:45.675071       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-6542-2157/csi-mockplugin-7756985bc6\" objectUID=ed3d6552-46e7-48ba-8f43-a9a32d054dd6 kind=\"ControllerRevision\" propagationPolicy=Background\nI1012 18:37:45.676195       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-6542-2157/csi-mockplugin-0\" objectUID=32f089a2-f97b-4cb8-aac8-a3965b73d9d0 kind=\"Pod\" propagationPolicy=Background\nI1012 18:37:45.717705       1 namespace_controller.go:185] Namespace has been deleted volume-9288\nI1012 18:37:45.771004       1 namespace_controller.go:185] Namespace has been deleted statefulset-3442\nE1012 18:37:45.779840       1 tokens_controller.go:262] error synchronizing serviceaccount downward-api-4299/default: secrets \"default-token-27f8f\" is forbidden: unable to create new content in namespace downward-api-4299 because it is being terminated\nI1012 18:37:45.783096       1 stateful_set.go:440] StatefulSet has been deleted csi-mock-volumes-6542-2157/csi-mockplugin-attacher\nI1012 18:37:45.783102       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-6542-2157/csi-mockplugin-attacher-74f798fd96\" objectUID=6c6ff063-0703-4c6a-acfc-c88136e3fa11 kind=\"ControllerRevision\" virtual=false\nI1012 18:37:45.783149       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-6542-2157/csi-mockplugin-attacher-0\" objectUID=fb3a6f32-71ee-4a9b-a9c9-78b40702f2d5 kind=\"Pod\" virtual=false\nI1012 18:37:45.785744       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-6542-2157/csi-mockplugin-attacher-74f798fd96\" objectUID=6c6ff063-0703-4c6a-acfc-c88136e3fa11 kind=\"ControllerRevision\" 
propagationPolicy=Background\nI1012 18:37:45.786683       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-6542-2157/csi-mockplugin-attacher-0\" objectUID=fb3a6f32-71ee-4a9b-a9c9-78b40702f2d5 kind=\"Pod\" propagationPolicy=Background\nI1012 18:37:45.950516       1 namespace_controller.go:185] Namespace has been deleted certificates-7461\nI1012 18:37:46.708055       1 namespace_controller.go:185] Namespace has been deleted endpointslicemirroring-8930\nI1012 18:37:46.900530       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-9298/pvc-dw4vl\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-mock-csi-mock-volumes-9298\\\" or manually created by system administrator\"\nI1012 18:37:47.024408       1 pv_controller.go:879] volume \"pvc-82d70f86-abc1-4f0d-ace8-9e5a4d6f0cf5\" entered phase \"Bound\"\nI1012 18:37:47.024699       1 pv_controller.go:982] volume \"pvc-82d70f86-abc1-4f0d-ace8-9e5a4d6f0cf5\" bound to claim \"csi-mock-volumes-9298/pvc-dw4vl\"\nI1012 18:37:47.033162       1 pv_controller.go:823] claim \"csi-mock-volumes-9298/pvc-dw4vl\" entered phase \"Bound\"\nI1012 18:37:47.505207       1 pv_controller.go:879] volume \"pvc-a0239118-f1bd-4bb0-9cbf-98731914214c\" entered phase \"Bound\"\nI1012 18:37:47.505314       1 pv_controller.go:982] volume \"pvc-a0239118-f1bd-4bb0-9cbf-98731914214c\" bound to claim \"statefulset-661/datadir-ss-1\"\nI1012 18:37:47.513314       1 pv_controller.go:823] claim \"statefulset-661/datadir-ss-1\" entered phase \"Bound\"\nI1012 18:37:48.001257       1 namespace_controller.go:185] Namespace has been deleted pods-2011\nI1012 18:37:48.110941       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-a0239118-f1bd-4bb0-9cbf-98731914214c\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-064d341a76a384181\") from node \"ip-172-20-47-26.us-west-1.compute.internal\" \nI1012 18:37:48.146522       1 namespace_controller.go:185] Namespace has been deleted secrets-3807\nI1012 18:37:48.233369       1 namespace_controller.go:185] Namespace has been deleted kubectl-4356\nI1012 18:37:48.314621       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-6542\nE1012 18:37:48.330937       1 tokens_controller.go:262] error synchronizing serviceaccount pods-1467/default: secrets \"default-token-6bgfw\" is forbidden: unable to create new content in namespace pods-1467 because it is being terminated\nI1012 18:37:48.634829       1 pv_controller.go:879] volume \"pvc-6c8b5b76-cb83-4ae9-8501-9f23110bd08f\" entered phase \"Bound\"\nI1012 18:37:48.634864       1 pv_controller.go:982] volume \"pvc-6c8b5b76-cb83-4ae9-8501-9f23110bd08f\" bound to claim \"volume-9264/csi-hostpath8gtqq\"\nI1012 18:37:48.642920       1 pv_controller.go:823] claim \"volume-9264/csi-hostpath8gtqq\" entered phase \"Bound\"\nI1012 18:37:48.896583       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-8319/pod-cd1a94d1-8823-4ffa-94e5-ee22ffb57dd6\" PVC=\"persistent-local-volumes-test-8319/pvc-hss9b\"\nI1012 18:37:48.896609       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-8319/pvc-hss9b\"\nI1012 18:37:49.433753       1 pv_controller.go:930] claim \"provisioning-7134/pvc-plkdg\" bound to volume \"local-dn2gq\"\nI1012 18:37:49.446435       1 pv_controller.go:879] volume \"local-dn2gq\" entered 
phase \"Bound\"\nI1012 18:37:49.446631       1 pv_controller.go:982] volume \"local-dn2gq\" bound to claim \"provisioning-7134/pvc-plkdg\"\nI1012 18:37:49.453723       1 pv_controller.go:823] claim \"provisioning-7134/pvc-plkdg\" entered phase \"Bound\"\nI1012 18:37:49.454190       1 event.go:291] \"Event occurred\" object=\"volume-provisioning-9109/pvc-f6nhr\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"ebs.csi.aws.com\\\" or manually created by system administrator\"\nE1012 18:37:49.936995       1 pv_controller.go:1451] error finding provisioning plugin for claim ephemeral-8286/inline-volume-ckxvq-my-volume: storageclass.storage.k8s.io \"no-such-storage-class\" not found\nI1012 18:37:49.937271       1 event.go:291] \"Event occurred\" object=\"ephemeral-8286/inline-volume-ckxvq-my-volume\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"no-such-storage-class\\\" not found\"\nI1012 18:37:50.023449       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-6c8b5b76-cb83-4ae9-8501-9f23110bd08f\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-volume-9264^7fcbdce6-2b8b-11ec-b468-d64d3f63a84b\") from node \"ip-172-20-56-153.us-west-1.compute.internal\" \nI1012 18:37:50.084394       1 graph_builder.go:587] add [v1/Pod, namespace: ephemeral-8286, name: inline-volume-ckxvq, uid: e75e4826-583b-4ede-b9e8-35c3ec81af5f] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI1012 18:37:50.084677       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-8286/inline-volume-ckxvq\" objectUID=e75e4826-583b-4ede-b9e8-35c3ec81af5f kind=\"Pod\" virtual=false\nI1012 18:37:50.084901       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-8286/inline-volume-ckxvq-my-volume\" objectUID=08b32303-2535-4157-9218-c4a62ed4b29a kind=\"PersistentVolumeClaim\" virtual=false\nI1012 18:37:50.087113       1 garbagecollector.go:595] adding [v1/PersistentVolumeClaim, namespace: ephemeral-8286, name: inline-volume-ckxvq-my-volume, uid: 08b32303-2535-4157-9218-c4a62ed4b29a] to attemptToDelete, because its owner [v1/Pod, namespace: ephemeral-8286, name: inline-volume-ckxvq, uid: e75e4826-583b-4ede-b9e8-35c3ec81af5f] is deletingDependents\nI1012 18:37:50.088748       1 garbagecollector.go:580] \"Deleting object\" object=\"ephemeral-8286/inline-volume-ckxvq-my-volume\" objectUID=08b32303-2535-4157-9218-c4a62ed4b29a kind=\"PersistentVolumeClaim\" propagationPolicy=Background\nE1012 18:37:50.095351       1 pv_controller.go:1451] error finding provisioning plugin for claim ephemeral-8286/inline-volume-ckxvq-my-volume: storageclass.storage.k8s.io \"no-such-storage-class\" not found\nI1012 18:37:50.095795       1 event.go:291] \"Event occurred\" object=\"ephemeral-8286/inline-volume-ckxvq-my-volume\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"no-such-storage-class\\\" not found\"\nI1012 18:37:50.095894       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-8286/inline-volume-ckxvq-my-volume\" objectUID=08b32303-2535-4157-9218-c4a62ed4b29a kind=\"PersistentVolumeClaim\" virtual=false\nI1012 18:37:50.097867       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"ephemeral-8286/inline-volume-ckxvq-my-volume\"\nI1012 
18:37:50.102945       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-8286/inline-volume-ckxvq\" objectUID=e75e4826-583b-4ede-b9e8-35c3ec81af5f kind=\"Pod\" virtual=false\nI1012 18:37:50.104593       1 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: ephemeral-8286, name: inline-volume-ckxvq, uid: e75e4826-583b-4ede-b9e8-35c3ec81af5f]\nI1012 18:37:50.172335       1 namespace_controller.go:185] Namespace has been deleted emptydir-9825\nI1012 18:37:50.228346       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-8319/pod-cd1a94d1-8823-4ffa-94e5-ee22ffb57dd6\" PVC=\"persistent-local-volumes-test-8319/pvc-hss9b\"\nI1012 18:37:50.228372       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-8319/pvc-hss9b\"\nI1012 18:37:50.232847       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-8319/pod-cd1a94d1-8823-4ffa-94e5-ee22ffb57dd6\" PVC=\"persistent-local-volumes-test-8319/pvc-hss9b\"\nI1012 18:37:50.233207       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-8319/pvc-hss9b\"\nI1012 18:37:50.237404       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"persistent-local-volumes-test-8319/pvc-hss9b\"\nI1012 18:37:50.243518       1 pv_controller.go:640] volume \"local-pvbrlx7\" is released and reclaim policy \"Retain\" will be executed\nI1012 18:37:50.247108       1 pv_controller.go:879] volume \"local-pvbrlx7\" entered phase \"Released\"\nI1012 18:37:50.251016       1 pv_controller_base.go:505] deletion of claim \"persistent-local-volumes-test-8319/pvc-hss9b\" was already processed\nI1012 18:37:50.470581       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume \"pvc-a0239118-f1bd-4bb0-9cbf-98731914214c\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-064d341a76a384181\") from node \"ip-172-20-47-26.us-west-1.compute.internal\" \nI1012 18:37:50.470766       1 event.go:291] \"Event occurred\" object=\"statefulset-661/ss-1\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-a0239118-f1bd-4bb0-9cbf-98731914214c\\\" \"\nI1012 18:37:50.552712       1 event.go:291] \"Event occurred\" object=\"volume-9264/hostpath-injector\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-6c8b5b76-cb83-4ae9-8501-9f23110bd08f\\\" \"\nI1012 18:37:50.552664       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume \"pvc-6c8b5b76-cb83-4ae9-8501-9f23110bd08f\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-volume-9264^7fcbdce6-2b8b-11ec-b468-d64d3f63a84b\") from node \"ip-172-20-56-153.us-west-1.compute.internal\" \nI1012 18:37:50.823848       1 resource_quota_controller.go:435] syncing resource quota controller with updated resources from discovery: added: [], removed: [kubectl.example.com/v1, Resource=e2e-test-kubectl-7350-crds]\nI1012 18:37:50.824061       1 shared_informer.go:240] Waiting for caches to sync for resource quota\nI1012 18:37:50.824126       1 shared_informer.go:247] Caches are synced for resource quota \nI1012 18:37:50.824138       1 resource_quota_controller.go:454] synced quota controller\nI1012 18:37:50.879994       1 namespace_controller.go:185] Namespace has been deleted downward-api-4299\nI1012 
18:37:50.910193       1 garbagecollector.go:213] syncing garbage collector with updated resources from discovery (attempt 1): added: [], removed: [kubectl.example.com/v1, Resource=e2e-test-kubectl-7350-crds]\nI1012 18:37:50.910356       1 shared_informer.go:240] Waiting for caches to sync for garbage collector\nI1012 18:37:50.910448       1 shared_informer.go:247] Caches are synced for garbage collector \nI1012 18:37:50.910503       1 garbagecollector.go:254] synced garbage collector\nE1012 18:37:50.940835       1 tokens_controller.go:262] error synchronizing serviceaccount csi-mock-volumes-6542-2157/default: secrets \"default-token-q7hss\" is forbidden: unable to create new content in namespace csi-mock-volumes-6542-2157 because it is being terminated\nE1012 18:37:52.218069       1 tokens_controller.go:262] error synchronizing serviceaccount emptydir-5106/default: secrets \"default-token-frt6f\" is forbidden: unable to create new content in namespace emptydir-5106 because it is being terminated\nE1012 18:37:52.497125       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI1012 18:37:52.507390       1 pv_controller.go:879] volume \"local-pv6m6wk\" entered phase \"Available\"\nI1012 18:37:52.554843       1 pv_controller.go:930] claim \"persistent-local-volumes-test-989/pvc-zbhd9\" bound to volume \"local-pv6m6wk\"\nI1012 18:37:52.562265       1 pv_controller.go:879] volume \"local-pv6m6wk\" entered phase \"Bound\"\nI1012 18:37:52.562503       1 pv_controller.go:982] volume \"local-pv6m6wk\" bound to claim \"persistent-local-volumes-test-989/pvc-zbhd9\"\nI1012 18:37:52.570328       1 pv_controller.go:823] claim \"persistent-local-volumes-test-989/pvc-zbhd9\" entered phase \"Bound\"\nI1012 18:37:52.641583       1 event.go:291] \"Event occurred\" object=\"ephemeral-8286-7585/csi-hostpathplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful\"\nI1012 18:37:52.793846       1 event.go:291] \"Event occurred\" object=\"ephemeral-8286/inline-volume-tester-t97jx-my-volume-0\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-ephemeral-8286\\\" or manually created by system administrator\"\nI1012 18:37:52.794782       1 event.go:291] \"Event occurred\" object=\"ephemeral-8286/inline-volume-tester-t97jx-my-volume-0\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-ephemeral-8286\\\" or manually created by system administrator\"\nE1012 18:37:52.992840       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI1012 18:37:53.036773       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"services-451/up-down-3\" need=3 creating=3\nI1012 18:37:53.044610       1 event.go:291] \"Event occurred\" object=\"services-451/up-down-3\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: up-down-3-lwjxf\"\nI1012 
18:37:53.065177       1 event.go:291] \"Event occurred\" object=\"services-451/up-down-3\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: up-down-3-q4w5x\"\nI1012 18:37:53.071930       1 event.go:291] \"Event occurred\" object=\"services-451/up-down-3\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: up-down-3-s7rrj\"\nE1012 18:37:53.105685       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE1012 18:37:53.139183       1 tokens_controller.go:262] error synchronizing serviceaccount volume-6173/default: secrets \"default-token-z4x4x\" is forbidden: unable to create new content in namespace volume-6173 because it is being terminated\nI1012 18:37:53.409346       1 namespace_controller.go:185] Namespace has been deleted pods-1467\nI1012 18:37:53.757339       1 namespace_controller.go:185] Namespace has been deleted metadata-concealment-7658\nI1012 18:37:54.026624       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"provisioning-9973/awsmlxsr\"\nI1012 18:37:54.033470       1 pv_controller.go:640] volume \"pvc-db33b6a5-8108-4203-a36b-c99e35256a39\" is released and reclaim policy \"Delete\" will be executed\nI1012 18:37:54.036094       1 pv_controller.go:879] volume \"pvc-db33b6a5-8108-4203-a36b-c99e35256a39\" entered phase \"Released\"\nI1012 18:37:54.039186       1 pv_controller.go:1340] isVolumeReleased[pvc-db33b6a5-8108-4203-a36b-c99e35256a39]: volume is released\nI1012 18:37:54.385490       1 pv_controller.go:879] volume \"pvc-c6462a67-f5a6-4e82-9b77-3633a2f37b16\" entered phase \"Bound\"\nI1012 18:37:54.385761       1 pv_controller.go:982] volume \"pvc-c6462a67-f5a6-4e82-9b77-3633a2f37b16\" bound to claim \"ephemeral-8286/inline-volume-tester-t97jx-my-volume-0\"\nI1012 18:37:54.392807       1 pv_controller.go:823] claim \"ephemeral-8286/inline-volume-tester-t97jx-my-volume-0\" entered phase \"Bound\"\nE1012 18:37:54.823397       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE1012 18:37:55.038587       1 pv_controller.go:1451] error finding provisioning plugin for claim provisioning-9195/pvc-2pqmq: storageclass.storage.k8s.io \"provisioning-9195\" not found\nI1012 18:37:55.038780       1 event.go:291] \"Event occurred\" object=\"provisioning-9195/pvc-2pqmq\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"provisioning-9195\\\" not found\"\nE1012 18:37:55.081710       1 pv_controller.go:1451] error finding provisioning plugin for claim provisioning-8223/pvc-zlqw9: storageclass.storage.k8s.io \"provisioning-8223\" not found\nI1012 18:37:55.081744       1 event.go:291] \"Event occurred\" object=\"provisioning-8223/pvc-zlqw9\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"provisioning-8223\\\" not found\"\nI1012 18:37:55.094247       1 pv_controller.go:879] volume \"local-svnjh\" entered phase \"Available\"\nI1012 18:37:55.137971       1 pv_controller.go:879] volume \"local-cqncx\" entered phase \"Available\"\nE1012 18:37:55.230337       1 
reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI1012 18:37:55.445704       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"csi-mock-volumes-9298/pvc-dw4vl\"\nI1012 18:37:55.451634       1 pv_controller.go:640] volume \"pvc-82d70f86-abc1-4f0d-ace8-9e5a4d6f0cf5\" is released and reclaim policy \"Delete\" will be executed\nI1012 18:37:55.455945       1 pv_controller.go:879] volume \"pvc-82d70f86-abc1-4f0d-ace8-9e5a4d6f0cf5\" entered phase \"Released\"\nI1012 18:37:55.458996       1 pv_controller.go:1340] isVolumeReleased[pvc-82d70f86-abc1-4f0d-ace8-9e5a4d6f0cf5]: volume is released\nI1012 18:37:55.515501       1 pv_controller_base.go:505] deletion of claim \"csi-mock-volumes-9298/pvc-dw4vl\" was already processed\nI1012 18:37:55.871593       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"provisioning-7134/pvc-plkdg\"\nI1012 18:37:55.876608       1 pv_controller.go:640] volume \"local-dn2gq\" is released and reclaim policy \"Retain\" will be executed\nI1012 18:37:55.879071       1 pv_controller.go:879] volume \"local-dn2gq\" entered phase \"Released\"\nI1012 18:37:55.928407       1 pv_controller_base.go:505] deletion of claim \"provisioning-7134/pvc-plkdg\" was already processed\nI1012 18:37:56.024877       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-6542-2157\nI1012 18:37:56.103679       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-c6462a67-f5a6-4e82-9b77-3633a2f37b16\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-ephemeral-8286^8336dd82-2b8b-11ec-8965-ae4e8a9d2055\") from node \"ip-172-20-59-223.us-west-1.compute.internal\" \nI1012 18:37:56.258207       1 namespace_controller.go:185] Namespace has been deleted provisioning-7453\nI1012 18:37:56.315359       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"aws-qnrx5\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-07a70def94c9e2513\") on node \"ip-172-20-56-153.us-west-1.compute.internal\" \nI1012 18:37:56.320218       1 operation_generator.go:1577] Verified volume is safe to detach for volume \"aws-qnrx5\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-07a70def94c9e2513\") on node \"ip-172-20-56-153.us-west-1.compute.internal\" \nI1012 18:37:56.420794       1 namespace_controller.go:185] Namespace has been deleted security-context-4034\nI1012 18:37:56.650947       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume \"pvc-c6462a67-f5a6-4e82-9b77-3633a2f37b16\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-ephemeral-8286^8336dd82-2b8b-11ec-8965-ae4e8a9d2055\") from node \"ip-172-20-59-223.us-west-1.compute.internal\" \nI1012 18:37:56.651209       1 event.go:291] \"Event occurred\" object=\"ephemeral-8286/inline-volume-tester-t97jx\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-c6462a67-f5a6-4e82-9b77-3633a2f37b16\\\" \"\nI1012 18:37:56.728324       1 namespace_controller.go:185] Namespace has been deleted provisioning-4917\nI1012 18:37:56.911043       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"volume-9308/pvc-kcgbc\"\nI1012 18:37:56.922718       1 pv_controller.go:640] volume \"aws-qnrx5\" is released and reclaim policy \"Retain\" will be executed\nI1012 18:37:56.926083       1 pv_controller.go:879] volume \"aws-qnrx5\" entered phase 
\"Released\"\nI1012 18:37:57.092371       1 namespace_controller.go:185] Namespace has been deleted nettest-6463\nI1012 18:37:57.287325       1 namespace_controller.go:185] Namespace has been deleted emptydir-5106\nI1012 18:37:57.307662       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"disruption-7995/rs\" need=10 creating=10\nI1012 18:37:57.313722       1 event.go:291] \"Event occurred\" object=\"disruption-7995/rs\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: rs-64dmt\"\nI1012 18:37:57.337108       1 event.go:291] \"Event occurred\" object=\"disruption-7995/rs\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: rs-sc5sg\"\nI1012 18:37:57.337136       1 event.go:291] \"Event occurred\" object=\"disruption-7995/rs\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: rs-z7pjr\"\nI1012 18:37:57.354711       1 event.go:291] \"Event occurred\" object=\"disruption-7995/rs\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: rs-l8hlz\"\nI1012 18:37:57.354974       1 event.go:291] \"Event occurred\" object=\"disruption-7995/rs\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: rs-tmx6k\"\nI1012 18:37:57.355186       1 event.go:291] \"Event occurred\" object=\"disruption-7995/rs\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: rs-8k88n\"\nI1012 18:37:57.362741       1 event.go:291] \"Event occurred\" object=\"disruption-7995/rs\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: rs-4r7f8\"\nI1012 18:37:57.374703       1 event.go:291] \"Event occurred\" object=\"disruption-7995/rs\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: rs-qjfvw\"\nI1012 18:37:57.381102       1 event.go:291] \"Event occurred\" object=\"disruption-7995/rs\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: rs-6k446\"\nI1012 18:37:57.385381       1 event.go:291] \"Event occurred\" object=\"disruption-7995/rs\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: rs-qq47n\"\nI1012 18:37:58.166302       1 namespace_controller.go:185] Namespace has been deleted volume-6173\nI1012 18:37:59.669716       1 namespace_controller.go:185] Namespace has been deleted persistent-local-volumes-test-8319\nI1012 18:38:00.111366       1 job_controller.go:406] enqueueing job cronjob-7621/concurrent-27234398\nI1012 18:38:00.111868       1 event.go:291] \"Event occurred\" object=\"cronjob-7621/concurrent\" kind=\"CronJob\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created job concurrent-27234398\"\nI1012 18:38:00.119752       1 cronjob_controllerv2.go:193] \"Error cleaning up jobs\" cronjob=\"cronjob-7621/concurrent\" resourceVersion=\"39218\" err=\"Operation cannot be fulfilled on cronjobs.batch \\\"concurrent\\\": the object has been modified; please apply your changes to the latest version and try again\"\nE1012 18:38:00.119776       1 cronjob_controllerv2.go:154] error syncing CronJobController cronjob-7621/concurrent, requeuing: Operation cannot be fulfilled on cronjobs.batch \"concurrent\": the 
object has been modified; please apply your changes to the latest version and try again\nI1012 18:38:00.122587       1 job_controller.go:406] enqueueing job cronjob-7621/concurrent-27234398\nI1012 18:38:00.127982       1 event.go:291] \"Event occurred\" object=\"cronjob-7621/concurrent-27234398\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: concurrent-27234398--1-6xxm5\"\nI1012 18:38:00.133386       1 job_controller.go:406] enqueueing job cronjob-7621/concurrent-27234398\nI1012 18:38:00.138298       1 job_controller.go:406] enqueueing job cronjob-7621/concurrent-27234398\nI1012 18:38:00.427642       1 job_controller.go:406] enqueueing job cronjob-7621/concurrent-27234398\nE1012 18:38:01.084994       1 tokens_controller.go:262] error synchronizing serviceaccount projected-5185/default: secrets \"default-token-mn7hq\" is forbidden: unable to create new content in namespace projected-5185 because it is being terminated\nI1012 18:38:01.426628       1 job_controller.go:406] enqueueing job cronjob-7621/concurrent-27234398\nE1012 18:38:01.498636       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-7134/default: secrets \"default-token-r7pfh\" is forbidden: unable to create new content in namespace provisioning-7134 because it is being terminated\nE1012 18:38:02.062249       1 tokens_controller.go:262] error synchronizing serviceaccount kubectl-8153/default: secrets \"default-token-nddzj\" is forbidden: unable to create new content in namespace kubectl-8153 because it is being terminated\nI1012 18:38:02.485077       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-db33b6a5-8108-4203-a36b-c99e35256a39\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-055d701a3bde092c7\") on node \"ip-172-20-56-153.us-west-1.compute.internal\" \nI1012 18:38:02.487666       1 operation_generator.go:1577] Verified volume is safe to detach for volume \"pvc-db33b6a5-8108-4203-a36b-c99e35256a39\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-055d701a3bde092c7\") on node \"ip-172-20-56-153.us-west-1.compute.internal\" \nI1012 18:38:03.099324       1 pv_controller_base.go:505] deletion of claim \"volume-9308/pvc-kcgbc\" was already processed\nE1012 18:38:03.113073       1 pv_controller.go:1451] error finding provisioning plugin for claim provisioning-2433/pvc-jds2t: storageclass.storage.k8s.io \"provisioning-2433\" not found\nI1012 18:38:03.113562       1 event.go:291] \"Event occurred\" object=\"provisioning-2433/pvc-jds2t\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"provisioning-2433\\\" not found\"\nI1012 18:38:03.166695       1 pv_controller.go:879] volume \"local-qmzh7\" entered phase \"Available\"\nI1012 18:38:03.225409       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume \"aws-qnrx5\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-07a70def94c9e2513\") on node \"ip-172-20-56-153.us-west-1.compute.internal\" \nI1012 18:38:03.540502       1 graph_builder.go:587] add [v1/Pod, namespace: ephemeral-8286, name: inline-volume-tester-t97jx, uid: 4a20bf54-96df-409e-9868-8141b9dc0e0c] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI1012 18:38:03.540744       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-8286/inline-volume-tester-t97jx-my-volume-0\" objectUID=c6462a67-f5a6-4e82-9b77-3633a2f37b16 kind=\"PersistentVolumeClaim\" 
virtual=false\nI1012 18:38:03.541250       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-8286/inline-volume-tester-t97jx\" objectUID=4a20bf54-96df-409e-9868-8141b9dc0e0c kind=\"Pod\" virtual=false\nI1012 18:38:03.555471       1 garbagecollector.go:595] adding [v1/PersistentVolumeClaim, namespace: ephemeral-8286, name: inline-volume-tester-t97jx-my-volume-0, uid: c6462a67-f5a6-4e82-9b77-3633a2f37b16] to attemptToDelete, because its owner [v1/Pod, namespace: ephemeral-8286, name: inline-volume-tester-t97jx, uid: 4a20bf54-96df-409e-9868-8141b9dc0e0c] is deletingDependents\nI1012 18:38:03.557255       1 garbagecollector.go:580] \"Deleting object\" object=\"ephemeral-8286/inline-volume-tester-t97jx-my-volume-0\" objectUID=c6462a67-f5a6-4e82-9b77-3633a2f37b16 kind=\"PersistentVolumeClaim\" propagationPolicy=Background\nI1012 18:38:03.561427       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"ephemeral-8286/inline-volume-tester-t97jx\" PVC=\"ephemeral-8286/inline-volume-tester-t97jx-my-volume-0\"\nI1012 18:38:03.561448       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"ephemeral-8286/inline-volume-tester-t97jx-my-volume-0\"\nI1012 18:38:03.561525       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-8286/inline-volume-tester-t97jx-my-volume-0\" objectUID=c6462a67-f5a6-4e82-9b77-3633a2f37b16 kind=\"PersistentVolumeClaim\" virtual=false\nE1012 18:38:03.813890       1 namespace_controller.go:162] deletion of namespace apply-684 failed: unexpected items still remain in namespace: apply-684 for gvr: /v1, Resource=pods\nE1012 18:38:03.935929       1 namespace_controller.go:162] deletion of namespace apply-684 failed: unexpected items still remain in namespace: apply-684 for gvr: /v1, Resource=pods\nE1012 18:38:04.096284       1 namespace_controller.go:162] deletion of namespace apply-684 failed: unexpected items still remain in namespace: apply-684 for gvr: /v1, Resource=pods\nE1012 18:38:04.231395       1 namespace_controller.go:162] deletion of namespace apply-684 failed: unexpected items still remain in namespace: apply-684 for gvr: /v1, Resource=pods\nE1012 18:38:04.390825       1 namespace_controller.go:162] deletion of namespace apply-1087 failed: unexpected items still remain in namespace: apply-1087 for gvr: /v1, Resource=pods\nE1012 18:38:04.408831       1 namespace_controller.go:162] deletion of namespace apply-684 failed: unexpected items still remain in namespace: apply-684 for gvr: /v1, Resource=pods\nI1012 18:38:04.434094       1 pv_controller.go:930] claim \"provisioning-9195/pvc-2pqmq\" bound to volume \"local-svnjh\"\nI1012 18:38:04.439426       1 pv_controller.go:1340] isVolumeReleased[pvc-db33b6a5-8108-4203-a36b-c99e35256a39]: volume is released\nI1012 18:38:04.444391       1 pv_controller.go:879] volume \"local-svnjh\" entered phase \"Bound\"\nI1012 18:38:04.444582       1 pv_controller.go:982] volume \"local-svnjh\" bound to claim \"provisioning-9195/pvc-2pqmq\"\nI1012 18:38:04.454541       1 pv_controller.go:823] claim \"provisioning-9195/pvc-2pqmq\" entered phase \"Bound\"\nI1012 18:38:04.454887       1 pv_controller.go:930] claim \"provisioning-8223/pvc-zlqw9\" bound to volume \"local-cqncx\"\nI1012 18:38:04.455395       1 event.go:291] \"Event occurred\" object=\"volume-provisioning-9109/pvc-f6nhr\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner 
\\\"ebs.csi.aws.com\\\" or manually created by system administrator\"\nI1012 18:38:04.466214       1 pv_controller.go:879] volume \"local-cqncx\" entered phase \"Bound\"\nI1012 18:38:04.466260       1 pv_controller.go:982] volume \"local-cqncx\" bound to claim \"provisioning-8223/pvc-zlqw9\"\nI1012 18:38:04.477358       1 pv_controller.go:823] claim \"provisioning-8223/pvc-zlqw9\" entered phase \"Bound\"\nI1012 18:38:04.477640       1 pv_controller.go:930] claim \"provisioning-2433/pvc-jds2t\" bound to volume \"local-qmzh7\"\nI1012 18:38:04.492293       1 pv_controller.go:879] volume \"local-qmzh7\" entered phase \"Bound\"\nI1012 18:38:04.492320       1 pv_controller.go:982] volume \"local-qmzh7\" bound to claim \"provisioning-2433/pvc-jds2t\"\nI1012 18:38:04.505790       1 pv_controller.go:823] claim \"provisioning-2433/pvc-jds2t\" entered phase \"Bound\"\nE1012 18:38:04.561717       1 namespace_controller.go:162] deletion of namespace apply-1087 failed: unexpected items still remain in namespace: apply-1087 for gvr: /v1, Resource=pods\nE1012 18:38:04.629885       1 namespace_controller.go:162] deletion of namespace apply-684 failed: unexpected items still remain in namespace: apply-684 for gvr: /v1, Resource=pods\nE1012 18:38:04.707318       1 namespace_controller.go:162] deletion of namespace apply-1087 failed: unexpected items still remain in namespace: apply-1087 for gvr: /v1, Resource=pods\nE1012 18:38:04.878184       1 namespace_controller.go:162] deletion of namespace apply-1087 failed: unexpected items still remain in namespace: apply-1087 for gvr: /v1, Resource=pods\nE1012 18:38:04.932672       1 namespace_controller.go:162] deletion of namespace apply-684 failed: unexpected items still remain in namespace: apply-684 for gvr: /v1, Resource=pods\nE1012 18:38:05.017031       1 namespace_controller.go:162] deletion of namespace apply-1087 failed: unexpected items still remain in namespace: apply-1087 for gvr: /v1, Resource=pods\nE1012 18:38:05.232646       1 namespace_controller.go:162] deletion of namespace apply-1087 failed: unexpected items still remain in namespace: apply-1087 for gvr: /v1, Resource=pods\nE1012 18:38:05.362309       1 namespace_controller.go:162] deletion of namespace apply-684 failed: unexpected items still remain in namespace: apply-684 for gvr: /v1, Resource=pods\nE1012 18:38:05.546947       1 namespace_controller.go:162] deletion of namespace apply-1087 failed: unexpected items still remain in namespace: apply-1087 for gvr: /v1, Resource=pods\nI1012 18:38:05.683510       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"disruption-7995/rs\" need=10 creating=1\nI1012 18:38:05.691133       1 event.go:291] \"Event occurred\" object=\"disruption-7995/rs\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: rs-ngq5f\"\nE1012 18:38:05.985013       1 namespace_controller.go:162] deletion of namespace apply-1087 failed: unexpected items still remain in namespace: apply-1087 for gvr: /v1, Resource=pods\nE1012 18:38:06.100168       1 namespace_controller.go:162] deletion of namespace apply-684 failed: unexpected items still remain in namespace: apply-684 for gvr: /v1, Resource=pods\nI1012 18:38:06.148016       1 namespace_controller.go:185] Namespace has been deleted projected-5185\nI1012 18:38:06.625460       1 namespace_controller.go:185] Namespace has been deleted provisioning-7134\nE1012 18:38:06.735618       1 namespace_controller.go:162] deletion of namespace apply-1087 failed: unexpected items 
still remain in namespace: apply-1087 for gvr: /v1, Resource=pods\nI1012 18:38:06.862955       1 resource_quota_controller.go:307] Resource quota has been deleted resourcequota-9873/test-quota\nI1012 18:38:06.932731       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"replicaset-8249/my-hostname-basic-a4585f50-74d9-4cca-9b5c-60148ee0edf1\" need=1 creating=1\nI1012 18:38:06.936694       1 event.go:291] \"Event occurred\" object=\"replicaset-8249/my-hostname-basic-a4585f50-74d9-4cca-9b5c-60148ee0edf1\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: my-hostname-basic-a4585f50-74d9-4cca-9b5c-60148ee0edf1-89tcc\"\nE1012 18:38:07.009696       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE1012 18:38:07.065583       1 tokens_controller.go:262] error synchronizing serviceaccount csi-mock-volumes-9298/default: secrets \"default-token-9j7wh\" is forbidden: unable to create new content in namespace csi-mock-volumes-9298 because it is being terminated\nI1012 18:38:07.101809       1 namespace_controller.go:185] Namespace has been deleted kubectl-8153\nI1012 18:38:07.370160       1 namespace_controller.go:185] Namespace has been deleted kubectl-5353\nE1012 18:38:07.635773       1 namespace_controller.go:162] deletion of namespace apply-684 failed: unexpected items still remain in namespace: apply-684 for gvr: /v1, Resource=pods\nI1012 18:38:07.740357       1 event.go:291] \"Event occurred\" object=\"statefulset-661/datadir-ss-2\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForFirstConsumer\" message=\"waiting for first consumer to be created before binding\"\nI1012 18:38:07.742589       1 event.go:291] \"Event occurred\" object=\"statefulset-661/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Claim datadir-ss-2 Pod ss-2 in StatefulSet ss success\"\nI1012 18:38:07.752116       1 event.go:291] \"Event occurred\" object=\"statefulset-661/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss-2 in StatefulSet ss successful\"\nI1012 18:38:07.758395       1 event.go:291] \"Event occurred\" object=\"statefulset-661/datadir-ss-2\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"ebs.csi.aws.com\\\" or manually created by system administrator\"\nE1012 18:38:08.132670       1 namespace_controller.go:162] deletion of namespace apply-1087 failed: unexpected items still remain in namespace: apply-1087 for gvr: /v1, Resource=pods\nI1012 18:38:08.749471       1 namespace_controller.go:185] Namespace has been deleted projected-6162\nI1012 18:38:09.099364       1 namespace_controller.go:185] Namespace has been deleted downward-api-959\nI1012 18:38:09.301897       1 pv_controller.go:1340] isVolumeReleased[pvc-db33b6a5-8108-4203-a36b-c99e35256a39]: volume is released\nI1012 18:38:09.392484       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume \"pvc-db33b6a5-8108-4203-a36b-c99e35256a39\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-055d701a3bde092c7\") on node \"ip-172-20-56-153.us-west-1.compute.internal\" \nI1012 18:38:09.457858       1 pv_controller_base.go:505] deletion 
of claim \"provisioning-9973/awsmlxsr\" was already processed\nI1012 18:38:09.513513       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-9298-3391/csi-mockplugin-556dcbb486\" objectUID=5dc026a2-4343-4254-85a0-ce8a1c682a64 kind=\"ControllerRevision\" virtual=false\nI1012 18:38:09.513516       1 stateful_set.go:440] StatefulSet has been deleted csi-mock-volumes-9298-3391/csi-mockplugin\nI1012 18:38:09.513775       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-9298-3391/csi-mockplugin-0\" objectUID=c9f11b3c-4920-4e3f-9732-89bfbd2d68d4 kind=\"Pod\" virtual=false\nI1012 18:38:09.529896       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-9298-3391/csi-mockplugin-556dcbb486\" objectUID=5dc026a2-4343-4254-85a0-ce8a1c682a64 kind=\"ControllerRevision\" propagationPolicy=Background\nI1012 18:38:09.533231       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-9298-3391/csi-mockplugin-0\" objectUID=c9f11b3c-4920-4e3f-9732-89bfbd2d68d4 kind=\"Pod\" propagationPolicy=Background\nI1012 18:38:10.000457       1 deployment_controller.go:583] \"Deployment has been deleted\" deployment=\"kubectl-4112/frontend\"\nI1012 18:38:10.000457       1 deployment_controller.go:583] \"Deployment has been deleted\" deployment=\"kubectl-4112/agnhost-replica\"\nI1012 18:38:10.000518       1 deployment_controller.go:583] \"Deployment has been deleted\" deployment=\"kubectl-4112/agnhost-primary\"\nI1012 18:38:10.134414       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"volume-7016/csi-hostpathdztjv\"\nI1012 18:38:10.141206       1 pv_controller.go:640] volume \"pvc-108a12f0-7d09-44d8-a81b-0d300e9e753d\" is released and reclaim policy \"Delete\" will be executed\nI1012 18:38:10.145945       1 pv_controller.go:879] volume \"pvc-108a12f0-7d09-44d8-a81b-0d300e9e753d\" entered phase \"Released\"\nI1012 18:38:10.147461       1 pv_controller.go:1340] isVolumeReleased[pvc-108a12f0-7d09-44d8-a81b-0d300e9e753d]: volume is released\nI1012 18:38:10.189331       1 pv_controller_base.go:505] deletion of claim \"volume-7016/csi-hostpathdztjv\" was already processed\nE1012 18:38:10.323379       1 namespace_controller.go:162] deletion of namespace apply-684 failed: unexpected items still remain in namespace: apply-684 for gvr: /v1, Resource=pods\nE1012 18:38:10.814190       1 namespace_controller.go:162] deletion of namespace apply-1087 failed: unexpected items still remain in namespace: apply-1087 for gvr: /v1, Resource=pods\nI1012 18:38:10.950303       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"disruption-7995/rs\" need=10 creating=1\nE1012 18:38:11.010479       1 disruption.go:534] Error syncing PodDisruptionBudget disruption-7995/foo, requeuing: Operation cannot be fulfilled on poddisruptionbudgets.policy \"foo\": the object has been modified; please apply your changes to the latest version and try again\nI1012 18:38:11.117571       1 garbagecollector.go:471] \"Processing object\" object=\"disruption-7995/rs-qq47n\" objectUID=ae44a2f8-f598-4e63-bff5-8f71dc93655f kind=\"Pod\" virtual=false\nI1012 18:38:11.117793       1 garbagecollector.go:471] \"Processing object\" object=\"disruption-7995/rs-ngq5f\" objectUID=529a3b57-bae7-4f24-ad65-711c9dd5b0a9 kind=\"Pod\" virtual=false\nI1012 18:38:11.117898       1 garbagecollector.go:471] \"Processing object\" object=\"disruption-7995/rs-64dmt\" objectUID=86470292-d9ec-47ca-b63d-abb16793b7ec kind=\"Pod\" virtual=false\nI1012 18:38:11.118037       1 
garbagecollector.go:471] \"Processing object\" object=\"disruption-7995/rs-qjfvw\" objectUID=6aa9c1c4-460d-43a8-ba74-6c265bd04e5e kind=\"Pod\" virtual=false\nI1012 18:38:11.118120       1 garbagecollector.go:471] \"Processing object\" object=\"disruption-7995/rs-6k446\" objectUID=4280b046-8480-4bbc-a386-b9a9874a8d7c kind=\"Pod\" virtual=false\nI1012 18:38:11.118193       1 garbagecollector.go:471] \"Processing object\" object=\"disruption-7995/rs-tmx6k\" objectUID=5e3f3ba6-af52-427c-a148-8fc42f067c97 kind=\"Pod\" virtual=false\nI1012 18:38:11.118264       1 garbagecollector.go:471] \"Processing object\" object=\"disruption-7995/rs-4r7f8\" objectUID=703c0320-2006-4b84-9a5b-4b431b4a16fd kind=\"Pod\" virtual=false\nI1012 18:38:11.118287       1 garbagecollector.go:471] \"Processing object\" object=\"disruption-7995/rs-8k88n\" objectUID=a03637ea-04f3-4a09-8a48-e0a3bdc4a0d7 kind=\"Pod\" virtual=false\nI1012 18:38:11.118346       1 garbagecollector.go:471] \"Processing object\" object=\"disruption-7995/rs-sc5sg\" objectUID=378c6ea7-1f70-4645-b494-7d1a633d6b4d kind=\"Pod\" virtual=false\nI1012 18:38:11.118367       1 garbagecollector.go:471] \"Processing object\" object=\"disruption-7995/rs-z7pjr\" objectUID=243fd778-539f-459f-b8b6-1c763991a1a8 kind=\"Pod\" virtual=false\nI1012 18:38:11.147986       1 pv_controller.go:879] volume \"pvc-9ced9036-28c0-4f8f-a7e1-c25690686810\" entered phase \"Bound\"\nI1012 18:38:11.148098       1 pv_controller.go:982] volume \"pvc-9ced9036-28c0-4f8f-a7e1-c25690686810\" bound to claim \"statefulset-661/datadir-ss-2\"\nI1012 18:38:11.154676       1 pv_controller.go:823] claim \"statefulset-661/datadir-ss-2\" entered phase \"Bound\"\nI1012 18:38:11.659342       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-989/pod-830e099b-e4e3-4394-822b-7ae67b7054ea\" PVC=\"persistent-local-volumes-test-989/pvc-zbhd9\"\nI1012 18:38:11.659791       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-989/pvc-zbhd9\"\nI1012 18:38:11.764391       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-9ced9036-28c0-4f8f-a7e1-c25690686810\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-03cb6cc3182f50b11\") from node \"ip-172-20-56-153.us-west-1.compute.internal\" \nI1012 18:38:11.886609       1 namespace_controller.go:185] Namespace has been deleted resourcequota-9873\nI1012 18:38:11.953537       1 event.go:291] \"Event occurred\" object=\"pvc-protection-3/pvc-protectionpb5g2\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForFirstConsumer\" message=\"waiting for first consumer to be created before binding\"\nI1012 18:38:12.015536       1 event.go:291] \"Event occurred\" object=\"pvc-protection-3/pvc-protectionpb5g2\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"ebs.csi.aws.com\\\" or manually created by system administrator\"\nI1012 18:38:12.086078       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-9298\nI1012 18:38:12.191687       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"provisioning-9195/pvc-2pqmq\"\nI1012 18:38:12.197957       1 pv_controller.go:640] volume \"local-svnjh\" is released and reclaim policy \"Retain\" will be executed\nI1012 18:38:12.200447       1 pv_controller.go:879] volume \"local-svnjh\" entered phase \"Released\"\nI1012 
18:38:12.246371       1 pv_controller_base.go:505] deletion of claim \"provisioning-9195/pvc-2pqmq\" was already processed\nI1012 18:38:12.476091       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-6c8b5b76-cb83-4ae9-8501-9f23110bd08f\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-volume-9264^7fcbdce6-2b8b-11ec-b468-d64d3f63a84b\") on node \"ip-172-20-56-153.us-west-1.compute.internal\" \nI1012 18:38:12.494426       1 operation_generator.go:1577] Verified volume is safe to detach for volume \"pvc-6c8b5b76-cb83-4ae9-8501-9f23110bd08f\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-volume-9264^7fcbdce6-2b8b-11ec-b468-d64d3f63a84b\") on node \"ip-172-20-56-153.us-west-1.compute.internal\" \nE1012 18:38:12.787229       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI1012 18:38:12.813463       1 pv_controller.go:879] volume \"local-pv5q2tt\" entered phase \"Available\"\nI1012 18:38:12.858270       1 pv_controller.go:930] claim \"persistent-local-volumes-test-3366/pvc-hgd9g\" bound to volume \"local-pv5q2tt\"\nI1012 18:38:12.865050       1 pv_controller.go:879] volume \"local-pv5q2tt\" entered phase \"Bound\"\nI1012 18:38:12.865332       1 pv_controller.go:982] volume \"local-pv5q2tt\" bound to claim \"persistent-local-volumes-test-3366/pvc-hgd9g\"\nI1012 18:38:12.876902       1 pv_controller.go:823] claim \"persistent-local-volumes-test-3366/pvc-hgd9g\" entered phase \"Bound\"\nI1012 18:38:12.928902       1 namespace_controller.go:185] Namespace has been deleted volume-9308\nI1012 18:38:13.075441       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume \"pvc-6c8b5b76-cb83-4ae9-8501-9f23110bd08f\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-volume-9264^7fcbdce6-2b8b-11ec-b468-d64d3f63a84b\") on node \"ip-172-20-56-153.us-west-1.compute.internal\" \nI1012 18:38:13.081593       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-6c8b5b76-cb83-4ae9-8501-9f23110bd08f\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-volume-9264^7fcbdce6-2b8b-11ec-b468-d64d3f63a84b\") from node \"ip-172-20-56-153.us-west-1.compute.internal\" \nI1012 18:38:13.649145       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume \"pvc-6c8b5b76-cb83-4ae9-8501-9f23110bd08f\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-volume-9264^7fcbdce6-2b8b-11ec-b468-d64d3f63a84b\") from node \"ip-172-20-56-153.us-west-1.compute.internal\" \nI1012 18:38:13.649348       1 event.go:291] \"Event occurred\" object=\"volume-9264/hostpath-client\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-6c8b5b76-cb83-4ae9-8501-9f23110bd08f\\\" \"\nI1012 18:38:13.703494       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-108a12f0-7d09-44d8-a81b-0d300e9e753d\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-volume-7016^7435998f-2b8b-11ec-91f3-b61a09aaa00c\") on node \"ip-172-20-37-53.us-west-1.compute.internal\" \nI1012 18:38:13.707117       1 operation_generator.go:1577] Verified volume is safe to detach for volume \"pvc-108a12f0-7d09-44d8-a81b-0d300e9e753d\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-volume-7016^7435998f-2b8b-11ec-91f3-b61a09aaa00c\") on node \"ip-172-20-37-53.us-west-1.compute.internal\" \nI1012 18:38:14.191432       1 operation_generator.go:369] 
AttachVolume.Attach succeeded for volume \"pvc-9ced9036-28c0-4f8f-a7e1-c25690686810\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-03cb6cc3182f50b11\") from node \"ip-172-20-56-153.us-west-1.compute.internal\" \nI1012 18:38:14.192027       1 event.go:291] \"Event occurred\" object=\"statefulset-661/ss-2\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-9ced9036-28c0-4f8f-a7e1-c25690686810\\\" \"\nI1012 18:38:14.284870       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume \"pvc-108a12f0-7d09-44d8-a81b-0d300e9e753d\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-volume-7016^7435998f-2b8b-11ec-91f3-b61a09aaa00c\") on node \"ip-172-20-37-53.us-west-1.compute.internal\" \nI1012 18:38:14.855340       1 event.go:291] \"Event occurred\" object=\"statefulset-5903/ss2\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss2-0 in StatefulSet ss2 successful\"\nI1012 18:38:15.320286       1 pv_controller.go:879] volume \"local-pv6r5xw\" entered phase \"Available\"\nI1012 18:38:15.368371       1 pv_controller.go:930] claim \"persistent-local-volumes-test-3763/pvc-n596g\" bound to volume \"local-pv6r5xw\"\nI1012 18:38:15.375383       1 pv_controller.go:879] volume \"local-pv6r5xw\" entered phase \"Bound\"\nI1012 18:38:15.375582       1 pv_controller.go:982] volume \"local-pv6r5xw\" bound to claim \"persistent-local-volumes-test-3763/pvc-n596g\"\nI1012 18:38:15.381460       1 pv_controller.go:823] claim \"persistent-local-volumes-test-3763/pvc-n596g\" entered phase \"Bound\"\nI1012 18:38:15.395817       1 pv_controller.go:879] volume \"pvc-3627441a-0ec0-46be-acda-2085656e0072\" entered phase \"Bound\"\nI1012 18:38:15.395850       1 pv_controller.go:982] volume \"pvc-3627441a-0ec0-46be-acda-2085656e0072\" bound to claim \"pvc-protection-3/pvc-protectionpb5g2\"\nI1012 18:38:15.402429       1 pv_controller.go:823] claim \"pvc-protection-3/pvc-protectionpb5g2\" entered phase \"Bound\"\nE1012 18:38:15.589854       1 namespace_controller.go:162] deletion of namespace apply-684 failed: unexpected items still remain in namespace: apply-684 for gvr: /v1, Resource=pods\nI1012 18:38:15.638967       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"persistent-local-volumes-test-3763/pvc-n596g\"\nI1012 18:38:15.645647       1 pv_controller.go:640] volume \"local-pv6r5xw\" is released and reclaim policy \"Retain\" will be executed\nI1012 18:38:15.649692       1 pv_controller.go:879] volume \"local-pv6r5xw\" entered phase \"Released\"\nI1012 18:38:15.696364       1 pv_controller_base.go:505] deletion of claim \"persistent-local-volumes-test-3763/pvc-n596g\" was already processed\nI1012 18:38:16.031605       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-3627441a-0ec0-46be-acda-2085656e0072\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-054f30210603dd4c7\") from node \"ip-172-20-47-26.us-west-1.compute.internal\" \nE1012 18:38:16.064816       1 namespace_controller.go:162] deletion of namespace apply-1087 failed: unexpected items still remain in namespace: apply-1087 for gvr: /v1, Resource=pods\nE1012 18:38:16.669664       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI1012 18:38:16.671800       1 pvc_protection_controller.go:303] 
\"Pod uses PVC\" pod=\"persistent-local-volumes-test-989/pod-830e099b-e4e3-4394-822b-7ae67b7054ea\" PVC=\"persistent-local-volumes-test-989/pvc-zbhd9\"\nI1012 18:38:16.671825       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-989/pvc-zbhd9\"\nI1012 18:38:16.724327       1 namespace_controller.go:185] Namespace has been deleted container-runtime-3017\nI1012 18:38:16.871438       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-989/pod-830e099b-e4e3-4394-822b-7ae67b7054ea\" PVC=\"persistent-local-volumes-test-989/pvc-zbhd9\"\nI1012 18:38:16.871469       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-989/pvc-zbhd9\"\nI1012 18:38:16.875823       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-989/pod-95cf5e77-0f81-4707-9b88-cc9fdabb6c4e\" PVC=\"persistent-local-volumes-test-989/pvc-zbhd9\"\nI1012 18:38:16.875993       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-989/pvc-zbhd9\"\nI1012 18:38:17.272104       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-989/pod-95cf5e77-0f81-4707-9b88-cc9fdabb6c4e\" PVC=\"persistent-local-volumes-test-989/pvc-zbhd9\"\nI1012 18:38:17.272165       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-989/pvc-zbhd9\"\nE1012 18:38:17.454972       1 tokens_controller.go:262] error synchronizing serviceaccount persistent-local-volumes-test-989/default: secrets \"default-token-rl46n\" is forbidden: unable to create new content in namespace persistent-local-volumes-test-989 because it is being terminated\nI1012 18:38:17.460885       1 pv_controller.go:879] volume \"local-pvzc45b\" entered phase \"Available\"\nI1012 18:38:17.476534       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-989/pod-95cf5e77-0f81-4707-9b88-cc9fdabb6c4e\" PVC=\"persistent-local-volumes-test-989/pvc-zbhd9\"\nI1012 18:38:17.476955       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-989/pvc-zbhd9\"\nI1012 18:38:17.481935       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"persistent-local-volumes-test-989/pvc-zbhd9\"\nI1012 18:38:17.489995       1 pv_controller.go:640] volume \"local-pv6m6wk\" is released and reclaim policy \"Retain\" will be executed\nI1012 18:38:17.496163       1 pv_controller.go:879] volume \"local-pv6m6wk\" entered phase \"Released\"\nI1012 18:38:17.500526       1 pv_controller_base.go:505] deletion of claim \"persistent-local-volumes-test-989/pvc-zbhd9\" was already processed\nI1012 18:38:17.509697       1 pv_controller.go:930] claim \"persistent-local-volumes-test-698/pvc-d2b7l\" bound to volume \"local-pvzc45b\"\nI1012 18:38:17.521943       1 pv_controller.go:879] volume \"local-pvzc45b\" entered phase \"Bound\"\nI1012 18:38:17.522464       1 pv_controller.go:982] volume \"local-pvzc45b\" bound to claim \"persistent-local-volumes-test-698/pvc-d2b7l\"\nI1012 18:38:17.536748       1 pv_controller.go:823] claim \"persistent-local-volumes-test-698/pvc-d2b7l\" entered phase \"Bound\"\nI1012 18:38:18.168183       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-5821-1761/csi-mockplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" 
reason=\"SuccessfulCreate\" message=\"create Pod csi-mockplugin-0 in StatefulSet csi-mockplugin successful\"\nI1012 18:38:18.263885       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-2000-9318\n==== END logs for container kube-controller-manager of pod kube-system/kube-controller-manager-ip-172-20-43-113.us-west-1.compute.internal ====\n==== START logs for container kube-proxy of pod kube-system/kube-proxy-ip-172-20-37-53.us-west-1.compute.internal ====\nI1012 18:20:13.942945       1 flags.go:59] FLAG: --add-dir-header=\"false\"\nI1012 18:20:13.943113       1 flags.go:59] FLAG: --alsologtostderr=\"true\"\nI1012 18:20:13.943121       1 flags.go:59] FLAG: --bind-address=\"0.0.0.0\"\nI1012 18:20:13.943128       1 flags.go:59] FLAG: --bind-address-hard-fail=\"false\"\nI1012 18:20:13.943134       1 flags.go:59] FLAG: --boot-id-file=\"/proc/sys/kernel/random/boot_id\"\nI1012 18:20:13.943139       1 flags.go:59] FLAG: --cleanup=\"false\"\nI1012 18:20:13.943143       1 flags.go:59] FLAG: --cluster-cidr=\"100.96.0.0/11\"\nI1012 18:20:13.943149       1 flags.go:59] FLAG: --config=\"\"\nI1012 18:20:13.943152       1 flags.go:59] FLAG: --config-sync-period=\"15m0s\"\nI1012 18:20:13.943162       1 flags.go:59] FLAG: --conntrack-max-per-core=\"131072\"\nI1012 18:20:13.943173       1 flags.go:59] FLAG: --conntrack-min=\"131072\"\nI1012 18:20:13.943177       1 flags.go:59] FLAG: --conntrack-tcp-timeout-close-wait=\"1h0m0s\"\nI1012 18:20:13.943181       1 flags.go:59] FLAG: --conntrack-tcp-timeout-established=\"24h0m0s\"\nI1012 18:20:13.943185       1 flags.go:59] FLAG: --detect-local-mode=\"\"\nI1012 18:20:13.943191       1 flags.go:59] FLAG: --feature-gates=\"\"\nI1012 18:20:13.943197       1 flags.go:59] FLAG: --healthz-bind-address=\"0.0.0.0:10256\"\nI1012 18:20:13.943203       1 flags.go:59] FLAG: --healthz-port=\"10256\"\nI1012 18:20:13.943208       1 flags.go:59] FLAG: --help=\"false\"\nI1012 18:20:13.943213       1 flags.go:59] FLAG: --hostname-override=\"ip-172-20-37-53.us-west-1.compute.internal\"\nI1012 18:20:13.943220       1 flags.go:59] FLAG: --iptables-masquerade-bit=\"14\"\nI1012 18:20:13.943225       1 flags.go:59] FLAG: --iptables-min-sync-period=\"1s\"\nI1012 18:20:13.943230       1 flags.go:59] FLAG: --iptables-sync-period=\"30s\"\nI1012 18:20:13.943235       1 flags.go:59] FLAG: --ipvs-exclude-cidrs=\"[]\"\nI1012 18:20:13.943247       1 flags.go:59] FLAG: --ipvs-min-sync-period=\"0s\"\nI1012 18:20:13.943251       1 flags.go:59] FLAG: --ipvs-scheduler=\"\"\nI1012 18:20:13.943255       1 flags.go:59] FLAG: --ipvs-strict-arp=\"false\"\nI1012 18:20:13.943328       1 flags.go:59] FLAG: --ipvs-sync-period=\"30s\"\nI1012 18:20:13.943336       1 flags.go:59] FLAG: --ipvs-tcp-timeout=\"0s\"\nI1012 18:20:13.943340       1 flags.go:59] FLAG: --ipvs-tcpfin-timeout=\"0s\"\nI1012 18:20:13.943344       1 flags.go:59] FLAG: --ipvs-udp-timeout=\"0s\"\nI1012 18:20:13.943348       1 flags.go:59] FLAG: --kube-api-burst=\"10\"\nI1012 18:20:13.943352       1 flags.go:59] FLAG: --kube-api-content-type=\"application/vnd.kubernetes.protobuf\"\nI1012 18:20:13.943358       1 flags.go:59] FLAG: --kube-api-qps=\"5\"\nI1012 18:20:13.943365       1 flags.go:59] FLAG: --kubeconfig=\"/var/lib/kube-proxy/kubeconfig\"\nI1012 18:20:13.943370       1 flags.go:59] FLAG: --log-backtrace-at=\":0\"\nI1012 18:20:13.943378       1 flags.go:59] FLAG: --log-dir=\"\"\nI1012 18:20:13.943383       1 flags.go:59] FLAG: --log-file=\"/var/log/kube-proxy.log\"\nI1012 18:20:13.943388       1 flags.go:59] 
FLAG: --log-file-max-size=\"1800\"\nI1012 18:20:13.943393       1 flags.go:59] FLAG: --log-flush-frequency=\"5s\"\nI1012 18:20:13.943397       1 flags.go:59] FLAG: --logtostderr=\"false\"\nI1012 18:20:13.943401       1 flags.go:59] FLAG: --machine-id-file=\"/etc/machine-id,/var/lib/dbus/machine-id\"\nI1012 18:20:13.943413       1 flags.go:59] FLAG: --masquerade-all=\"false\"\nI1012 18:20:13.943417       1 flags.go:59] FLAG: --master=\"https://api.internal.e2e-2c3257e690-19bfa.test-cncf-aws.k8s.io\"\nI1012 18:20:13.943423       1 flags.go:59] FLAG: --metrics-bind-address=\"127.0.0.1:10249\"\nI1012 18:20:13.943428       1 flags.go:59] FLAG: --metrics-port=\"10249\"\nI1012 18:20:13.943432       1 flags.go:59] FLAG: --nodeport-addresses=\"[]\"\nI1012 18:20:13.943437       1 flags.go:59] FLAG: --one-output=\"false\"\nI1012 18:20:13.943441       1 flags.go:59] FLAG: --oom-score-adj=\"-998\"\nI1012 18:20:13.943446       1 flags.go:59] FLAG: --profiling=\"false\"\nI1012 18:20:13.943450       1 flags.go:59] FLAG: --proxy-mode=\"\"\nI1012 18:20:13.943456       1 flags.go:59] FLAG: --proxy-port-range=\"\"\nI1012 18:20:13.943461       1 flags.go:59] FLAG: --show-hidden-metrics-for-version=\"\"\nI1012 18:20:13.943468       1 flags.go:59] FLAG: --skip-headers=\"false\"\nI1012 18:20:13.943473       1 flags.go:59] FLAG: --skip-log-headers=\"false\"\nI1012 18:20:13.943477       1 flags.go:59] FLAG: --stderrthreshold=\"2\"\nI1012 18:20:13.943482       1 flags.go:59] FLAG: --udp-timeout=\"250ms\"\nI1012 18:20:13.943487       1 flags.go:59] FLAG: --v=\"2\"\nI1012 18:20:13.943496       1 flags.go:59] FLAG: --version=\"false\"\nI1012 18:20:13.943504       1 flags.go:59] FLAG: --vmodule=\"\"\nI1012 18:20:13.943510       1 flags.go:59] FLAG: --write-config-to=\"\"\nW1012 18:20:13.943522       1 server.go:224] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. 
Please begin using a config file ASAP.\nI1012 18:20:13.943977       1 feature_gate.go:245] feature gates: &{map[]}\nI1012 18:20:13.944499       1 feature_gate.go:245] feature gates: &{map[]}\nE1012 18:20:43.984504       1 node.go:161] Failed to retrieve node info: Get \"https://api.internal.e2e-2c3257e690-19bfa.test-cncf-aws.k8s.io/api/v1/nodes/ip-172-20-37-53.us-west-1.compute.internal\": dial tcp 203.0.113.123:443: i/o timeout\nI1012 18:20:45.098164       1 node.go:172] Successfully retrieved node IP: 172.20.37.53\nI1012 18:20:45.098275       1 server_others.go:140] Detected node IP 172.20.37.53\nW1012 18:20:45.098314       1 server_others.go:565] Unknown proxy mode \"\", assuming iptables proxy\nI1012 18:20:45.098417       1 server_others.go:177] DetectLocalMode: 'ClusterCIDR'\nI1012 18:20:45.165437       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary\nI1012 18:20:45.165480       1 server_others.go:212] Using iptables Proxier.\nI1012 18:20:45.165495       1 server_others.go:219] creating dualStackProxier for iptables.\nW1012 18:20:45.165514       1 server_others.go:495] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6\nI1012 18:20:45.165692       1 utils.go:370] Changed sysctl \"net/ipv4/conf/all/route_localnet\": 0 -> 1\nI1012 18:20:45.165759       1 proxier.go:281] \"Using iptables mark for masquerade\" ipFamily=IPv4 mark=\"0x00004000\"\nI1012 18:20:45.165925       1 proxier.go:327] \"Iptables sync params\" ipFamily=IPv4 minSyncPeriod=\"1s\" syncPeriod=\"30s\" burstSyncs=2\nI1012 18:20:45.165984       1 proxier.go:337] \"Iptables supports --random-fully\" ipFamily=IPv4\nI1012 18:20:45.166043       1 proxier.go:281] \"Using iptables mark for masquerade\" ipFamily=IPv6 mark=\"0x00004000\"\nI1012 18:20:45.166090       1 proxier.go:327] \"Iptables sync params\" ipFamily=IPv6 minSyncPeriod=\"1s\" syncPeriod=\"30s\" burstSyncs=2\nI1012 18:20:45.166110       1 proxier.go:337] \"Iptables supports --random-fully\" ipFamily=IPv6\nI1012 18:20:45.166366       1 server.go:649] Version: v1.22.2\nI1012 18:20:45.171285       1 conntrack.go:52] Setting nf_conntrack_max to 262144\nI1012 18:20:45.171354       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400\nI1012 18:20:45.171412       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600\nI1012 18:20:45.171833       1 config.go:315] Starting service config controller\nI1012 18:20:45.171851       1 shared_informer.go:240] Waiting for caches to sync for service config\nI1012 18:20:45.172529       1 config.go:224] Starting endpoint slice config controller\nI1012 18:20:45.172546       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config\nI1012 18:20:45.178512       1 service.go:301] Service kube-system/kube-dns updated: 3 ports\nI1012 18:20:45.178548       1 service.go:301] Service default/kubernetes updated: 1 ports\nE1012 18:20:45.178925       1 event_broadcaster.go:253] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"ip-172-20-37-53.us-west-1.compute.internal.16ad5b80031da9b3\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, EventTime:v1.MicroTime{Time:time.Time{wall:0xc051925f4a3a00ce, ext:31247774242, loc:(*time.Location)(0x2d81340)}}, Series:(*v1.EventSeries)(nil), ReportingController:\"kube-proxy\", ReportingInstance:\"kube-proxy-ip-172-20-37-53\", Action:\"StartKubeProxy\", Reason:\"Starting\", Regarding:v1.ObjectReference{Kind:\"Node\", Namespace:\"\", Name:\"ip-172-20-37-53.us-west-1.compute.internal\", UID:\"ip-172-20-37-53.us-west-1.compute.internal\", APIVersion:\"\", ResourceVersion:\"\", FieldPath:\"\"}, Related:(*v1.ObjectReference)(nil), Note:\"\", Type:\"Normal\", DeprecatedSource:v1.EventSource{Component:\"\", Host:\"\"}, DeprecatedFirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedLastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedCount:0}': 'Event \"ip-172-20-37-53.us-west-1.compute.internal.16ad5b80031da9b3\" is invalid: involvedObject.namespace: Invalid value: \"\": does not match event.namespace' (will not retry!)\nI1012 18:20:45.273874       1 shared_informer.go:247] Caches are synced for endpoint slice config \nI1012 18:20:45.274021       1 proxier.go:804] \"Not syncing iptables until Services and Endpoints have been received from master\"\nI1012 18:20:45.274096       1 proxier.go:804] \"Not syncing iptables until Services and Endpoints have been received from master\"\nI1012 18:20:45.273874       1 shared_informer.go:247] Caches are synced for service config \nI1012 18:20:45.274151       1 service.go:416] Adding new service port \"default/kubernetes:https\" at 100.64.0.1:443/TCP\nI1012 18:20:45.274171       1 service.go:416] Adding new service port \"kube-system/kube-dns:dns-tcp\" at 100.64.0.10:53/TCP\nI1012 18:20:45.274255       1 service.go:416] Adding new service port \"kube-system/kube-dns:metrics\" at 100.64.0.10:9153/TCP\nI1012 18:20:45.274265       1 service.go:416] Adding new service port \"kube-system/kube-dns:dns\" at 100.64.0.10:53/UDP\nI1012 18:20:45.274396       1 proxier.go:829] \"Stale service\" protocol=\"udp\" svcPortName=\"kube-system/kube-dns:dns\" clusterIP=\"100.64.0.10\"\nI1012 18:20:45.274416       1 proxier.go:845] \"Syncing iptables rules\"\nI1012 18:20:45.369430       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"95.294057ms\"\nI1012 18:20:45.369472       1 proxier.go:845] \"Syncing iptables rules\"\nI1012 18:20:45.431970       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"62.493834ms\"\nI1012 18:20:49.280356       1 proxier.go:845] \"Syncing iptables rules\"\nI1012 18:20:49.318258       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"37.953858ms\"\nI1012 18:20:49.318381       1 proxier.go:845] \"Syncing iptables rules\"\nI1012 18:20:49.371363       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"53.056846ms\"\nI1012 18:24:22.313874       1 service.go:301] Service proxy-8005/proxy-service-fmtv4 updated: 4 ports\nI1012 18:24:22.313946       1 service.go:416] Adding new service port \"proxy-8005/proxy-service-fmtv4:portname1\" at 100.64.153.244:80/TCP\nI1012 18:24:22.313962       1 service.go:416] Adding new service port \"proxy-8005/proxy-service-fmtv4:portname2\" at 100.64.153.244:81/TCP\nI1012 18:24:22.313974       1 service.go:416] Adding new service port \"proxy-8005/proxy-service-fmtv4:tlsportname1\" at 100.64.153.244:443/TCP\nI1012 18:24:22.313984       1 service.go:416] 
Adding new service port \"proxy-8005/proxy-service-fmtv4:tlsportname2\" at 100.64.153.244:444/TCP\nI1012 18:24:22.314015       1 proxier.go:845] \"Syncing iptables rules\"\nI1012 18:24:22.364596       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"50.635245ms\"\nI1012 18:24:22.364662       1 proxier.go:845] \"Syncing iptables rules\"\nI1012 18:24:22.405046       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"40.398741ms\"\nI1012 18:24:27.895422       1 service.go:301] Service webhook-3850/e2e-test-webhook updated: 1 ports\nI1012 18:24:27.895470       1 service.go:416] Adding new service port \"webhook-3850/e2e-test-webhook\" at 100.65.181.215:8443/TCP\nI1012 18:24:27.895515       1 proxier.go:845] \"Syncing iptables rules\"\nI1012 18:24:27.941442       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"45.947364ms\"\nI1012 18:24:27.941536       1 proxier.go:845] \"Syncing iptables rules\"\nI1012 18:24:27.982362       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"40.873626ms\"\nI1012 18:24:29.343063       1 service.go:301] Service webhook-3850/e2e-test-webhook updated: 0 ports\nI1012 18:24:29.343105       1 service.go:441] Removing service port \"webhook-3850/e2e-test-webhook\"\nI1012 18:24:29.343144       1 proxier.go:845] \"Syncing iptables rules\"\nI1012 18:24:29.382851       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"39.728377ms\"\nI1012 18:24:30.383012       1 proxier.go:845] \"Syncing iptables rules\"\nI1012 18:24:30.418622       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"35.648016ms\"\nI1012 18:24:31.212897       1 proxier.go:845] \"Syncing iptables rules\"\nI1012 18:24:31.255311       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"42.478333ms\"\nI1012 18:24:32.143144       1 service.go:301] Service conntrack-683/svc-udp updated: 1 ports\nI1012 18:24:32.143207       1 service.go:416] Adding new service port \"conntrack-683/svc-udp:udp\" at 100.70.227.38:80/UDP\nI1012 18:24:32.143248       1 proxier.go:845] \"Syncing iptables rules\"\nI1012 18:24:32.182907       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"39.693013ms\"\nI1012 18:24:33.103425       1 proxier.go:845] \"Syncing iptables rules\"\nI1012 18:24:33.155824       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"52.4555ms\"\nI1012 18:24:37.808325       1 proxier.go:845] \"Syncing iptables rules\"\nI1012 18:24:37.847154       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"38.882004ms\"\nI1012 18:24:38.210378       1 proxier.go:845] \"Syncing iptables rules\"\nI1012 18:24:38.247681       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"37.348918ms\"\nI1012 18:24:39.406011       1 proxier.go:829] \"Stale service\" protocol=\"udp\" svcPortName=\"conntrack-683/svc-udp:udp\" clusterIP=\"100.70.227.38\"\nI1012 18:24:39.406049       1 proxier.go:845] \"Syncing iptables rules\"\nI1012 18:24:39.463005       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"57.094639ms\"\nI1012 18:24:43.433807       1 proxier.go:845] \"Syncing iptables rules\"\nI1012 18:24:43.480562       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"46.773053ms\"\nI1012 18:24:43.493771       1 service.go:301] Service proxy-8005/proxy-service-fmtv4 updated: 0 ports\nI1012 18:24:43.493812       1 service.go:441] Removing service port \"proxy-8005/proxy-service-fmtv4:portname1\"\nI1012 18:24:43.493859       1 service.go:441] Removing service port \"proxy-8005/proxy-service-fmtv4:portname2\"\nI1012 18:24:43.493867       1 service.go:441] Removing service port 
\"proxy-8005/proxy-service-fmtv4:tlsportname1\"\nI1012 18:24:43.493874       1 service.go:441] Removing service port \"proxy-8005/proxy-service-fmtv4:tlsportname2\"\nI1012 18:24:43.493906       1 proxier.go:845] \"Syncing iptables rules\"\nI1012 18:24:43.547379       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"53.549091ms\"\nI1012 18:24:48.221059       1 service.go:301] Service dns-5555/test-service-2 updated: 1 ports\nI1012 18:24:48.221110       1 service.go:416] Adding new service port \"dns-5555/test-service-2:http\" at 100.65.90.221:80/TCP\nI1012 18:24:48.221150       1 proxier.go:845] \"Syncing iptables rules\"\nI1012 18:24:48.266171       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"45.056597ms\"\nI1012 18:24:48.266546       1 proxier.go:845] \"Syncing iptables rules\"\nI1012 18:24:48.306221       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"39.701115ms\"\nI1012 18:24:52.012751       1 proxier.go:845] \"Syncing iptables rules\"\nI1012 18:24:52.059891       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"47.179016ms\"\nI1012 18:24:53.227150       1 proxier.go:845] \"Syncing iptables rules\"\nI1012 18:24:53.282542       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"55.420638ms\"\nI1012 18:24:53.282630       1 proxier.go:845] \"Syncing iptables rules\"\nI1012 18:24:53.333242       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"50.647104ms\"\nI1012 18:25:00.636473       1 proxier.go:845] \"Syncing iptables rules\"\nI1012 18:25:00.779369       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"142.951059ms\"\nI1012 18:25:08.729020       1 proxier.go:845] \"Syncing iptables rules\"\nI1012 18:25:08.775785       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"46.787357ms\"\nI1012 18:25:08.838539       1 service.go:301] Service conntrack-683/svc-udp updated: 0 ports\nI1012 18:25:08.838597       1 service.go:441] Removing service port \"conntrack-683/svc-udp:udp\"\nI1012 18:25:08.838639       1 proxier.go:845] \"Syncing iptables rules\"\nI1012 18:25:08.900046       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"61.441518ms\"\nI1012 18:25:15.868346       1 service.go:301] Service services-9132/affinity-nodeport updated: 1 ports\nI1012 18:25:15.868400       1 service.go:416] Adding new service port \"services-9132/affinity-nodeport\" at 100.67.223.114:80/TCP\nI1012 18:25:15.868439       1 proxier.go:845] \"Syncing iptables rules\"\nI1012 18:25:15.905481       1 proxier.go:1283] \"Opened local port\" port=\"\\\"nodePort for services-9132/affinity-nodeport\\\" (:31101/tcp4)\"\nI1012 18:25:15.910438       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"42.028436ms\"\nI1012 18:25:15.910521       1 proxier.go:845] \"Syncing iptables rules\"\nI1012 18:25:15.970273       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"59.77793ms\"\nI1012 18:25:17.865814       1 proxier.go:845] \"Syncing iptables rules\"\nI1012 18:25:17.934032       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"68.246977ms\"\nI1012 18:25:18.943929       1 service.go:301] Service kubectl-4353/agnhost-primary updated: 1 ports\nI1012 18:25:18.943987       1 service.go:416] Adding new service port \"kubectl-4353/agnhost-primary\" at 100.67.5.97:6379/TCP\nI1012 18:25:18.944031       1 proxier.go:845] \"Syncing iptables rules\"\nI1012 18:25:18.976401       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"32.412459ms\"\nI1012 18:25:19.002977       1 proxier.go:845] \"Syncing iptables rules\"\nI1012 18:25:19.043622       1 
proxier.go:812] \"SyncProxyRules complete\" elapsed=\"40.67118ms\"\nI1012 18:25:20.804369       1 proxier.go:845] \"Syncing iptables rules\"\nI1012 18:25:20.836647       1 proxier.go:812] \"SyncProxyRules complete\" elapsed=\"32.324301ms\"\nI1012 18:25:22.033767