Result: FAILURE
Tests: 1 failed / 487 succeeded
Started: 2022-08-07 20:46
Elapsed: 26m44s
Revision: master

Test Failures


kubetest2 Test (12m51s)

exit status 255 (from junit_runner.xml)



487 Passed Tests

3658 Skipped Tests

Error lines from build-log.txt

... skipping 167 lines ...
I0807 20:47:46.811826    6146 http.go:37] curl https://storage.googleapis.com/kops-ci/bin/latest-ci-updown-green.txt
I0807 20:47:46.813218    6146 http.go:37] curl https://storage.googleapis.com/kops-ci/bin/1.25.0-alpha.3+v1.25.0-alpha.2-62-g82ec66a033/linux/amd64/kops
I0807 20:47:47.802967    6146 local.go:42] ⚙️ ssh-keygen -t ed25519 -N  -q -f /tmp/kops/e2e-e2e-kops-aws-cni-weave.test-cncf-aws.k8s.io/id_ed25519
I0807 20:47:47.816604    6146 up.go:44] Cleaning up any leaked resources from previous cluster
I0807 20:47:47.816716    6146 dumplogs.go:45] /home/prow/go/src/k8s.io/kops/_rundir/f3bffc3d-1691-11ed-bcf2-1217529f69d6/kops toolbox dump --name e2e-e2e-kops-aws-cni-weave.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /tmp/kops/e2e-e2e-kops-aws-cni-weave.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu
I0807 20:47:47.816733    6146 local.go:42] ⚙️ /home/prow/go/src/k8s.io/kops/_rundir/f3bffc3d-1691-11ed-bcf2-1217529f69d6/kops toolbox dump --name e2e-e2e-kops-aws-cni-weave.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /tmp/kops/e2e-e2e-kops-aws-cni-weave.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu
W0807 20:47:48.331057    6146 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I0807 20:47:48.331132    6146 down.go:48] /home/prow/go/src/k8s.io/kops/_rundir/f3bffc3d-1691-11ed-bcf2-1217529f69d6/kops delete cluster --name e2e-e2e-kops-aws-cni-weave.test-cncf-aws.k8s.io --yes
I0807 20:47:48.331147    6146 local.go:42] ⚙️ /home/prow/go/src/k8s.io/kops/_rundir/f3bffc3d-1691-11ed-bcf2-1217529f69d6/kops delete cluster --name e2e-e2e-kops-aws-cni-weave.test-cncf-aws.k8s.io --yes
I0807 20:47:48.364606    6177 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
Error: error reading cluster configuration: Cluster.kops.k8s.io "e2e-e2e-kops-aws-cni-weave.test-cncf-aws.k8s.io" not found
I0807 20:47:48.789452    6146 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2022/08/07 20:47:48 failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
I0807 20:47:48.803695    6146 http.go:37] curl https://ip.jsb.workers.dev
I0807 20:47:48.898611    6146 up.go:159] /home/prow/go/src/k8s.io/kops/_rundir/f3bffc3d-1691-11ed-bcf2-1217529f69d6/kops create cluster --name e2e-e2e-kops-aws-cni-weave.test-cncf-aws.k8s.io --cloud aws --kubernetes-version https://storage.googleapis.com/kubernetes-release/release/v1.22.12 --ssh-public-key /tmp/kops/e2e-e2e-kops-aws-cni-weave.test-cncf-aws.k8s.io/id_ed25519.pub --override cluster.spec.nodePortAccess=0.0.0.0/0 --image=099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20220706 --channel=alpha --networking=weave --container-runtime=containerd --node-size=t3.large --discovery-store=s3://k8s-kops-prow/discovery --admin-access 34.123.186.206/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones ap-northeast-1a --master-size c5.large
I0807 20:47:48.898687    6146 local.go:42] ⚙️ /home/prow/go/src/k8s.io/kops/_rundir/f3bffc3d-1691-11ed-bcf2-1217529f69d6/kops create cluster --name e2e-e2e-kops-aws-cni-weave.test-cncf-aws.k8s.io --cloud aws --kubernetes-version https://storage.googleapis.com/kubernetes-release/release/v1.22.12 --ssh-public-key /tmp/kops/e2e-e2e-kops-aws-cni-weave.test-cncf-aws.k8s.io/id_ed25519.pub --override cluster.spec.nodePortAccess=0.0.0.0/0 --image=099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20220706 --channel=alpha --networking=weave --container-runtime=containerd --node-size=t3.large --discovery-store=s3://k8s-kops-prow/discovery --admin-access 34.123.186.206/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones ap-northeast-1a --master-size c5.large
I0807 20:47:48.930952    6188 featureflag.go:164] FeatureFlag "SpecOverrideFlag"=true
I0807 20:47:48.947146    6188 create_cluster.go:862] Using SSH public key: /tmp/kops/e2e-e2e-kops-aws-cni-weave.test-cncf-aws.k8s.io/id_ed25519.pub
I0807 20:47:49.399249    6188 new_cluster.go:1168]  Cloud Provider ID = aws
... skipping 561 lines ...

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0807 20:48:37.704223    6228 validate_cluster.go:232] (will retry): cluster not yet healthy
... skipping 368 lines (the same "dns	apiserver	Validation Failed" block repeated, with "(will retry): cluster not yet healthy" retries roughly every 10s from 20:48:47 through 20:52:28) ...
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-1a	Master	c5.large	1	1	ap-northeast-1a
nodes-ap-northeast-1a	Node	t3.large	4	4	ap-northeast-1a

... skipping 20 lines ...
Pod	kube-system/kube-controller-manager-ip-172-20-58-56.ap-northeast-1.compute.internal	system-cluster-critical pod "kube-controller-manager-ip-172-20-58-56.ap-northeast-1.compute.internal" is pending
Pod	kube-system/weave-net-5xqj7								system-node-critical pod "weave-net-5xqj7" is pending
Pod	kube-system/weave-net-bgswz								system-node-critical pod "weave-net-bgswz" is pending
Pod	kube-system/weave-net-khpcr								system-node-critical pod "weave-net-khpcr" is pending
Pod	kube-system/weave-net-qdrnv								system-node-critical pod "weave-net-qdrnv" is pending

Validation Failed
W0807 20:52:42.176277    6228 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-1a	Master	c5.large	1	1	ap-northeast-1a
nodes-ap-northeast-1a	Node	t3.large	4	4	ap-northeast-1a

... skipping 12 lines ...
Pod	kube-system/ebs-csi-node-2cgph			system-node-critical pod "ebs-csi-node-2cgph" is pending
Pod	kube-system/ebs-csi-node-jv2qm			system-node-critical pod "ebs-csi-node-jv2qm" is pending
Pod	kube-system/ebs-csi-node-n6h5k			system-node-critical pod "ebs-csi-node-n6h5k" is pending
Pod	kube-system/ebs-csi-node-wcrgl			system-node-critical pod "ebs-csi-node-wcrgl" is pending
Pod	kube-system/weave-net-khpcr			system-node-critical pod "weave-net-khpcr" is pending

Validation Failed
W0807 20:52:54.689088    6228 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-1a	Master	c5.large	1	1	ap-northeast-1a
nodes-ap-northeast-1a	Node	t3.large	4	4	ap-northeast-1a

... skipping 21 lines ...
ip-172-20-63-231.ap-northeast-1.compute.internal	node	True

VALIDATION ERRORS
KIND	NAME									MESSAGE
Pod	kube-system/kube-proxy-ip-172-20-53-183.ap-northeast-1.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-53-183.ap-northeast-1.compute.internal" is pending

Validation Failed
W0807 20:53:19.820096    6228 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-1a	Master	c5.large	1	1	ap-northeast-1a
nodes-ap-northeast-1a	Node	t3.large	4	4	ap-northeast-1a

... skipping 1208 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Aug  7 20:56:01.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "request-timeout-9672" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Server request timeout default timeout should be used if the specified timeout in the request URL is 0s","total":-1,"completed":1,"skipped":3,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug  7 20:56:02.185: INFO: Only supported for providers [azure] (not aws)
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: azure-disk]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [azure] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1576
------------------------------
... skipping 105 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Aug  7 20:56:03.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5459" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info  [Conformance]","total":-1,"completed":1,"skipped":7,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug  7 20:56:03.893: INFO: Only supported for providers [gce gke] (not aws)
... skipping 77 lines ...
Aug  7 20:56:05.056: INFO: AfterEach: Cleaning up test resources.
Aug  7 20:56:05.056: INFO: Deleting PersistentVolumeClaim "pvc-fcjfb"
Aug  7 20:56:05.205: INFO: Deleting PersistentVolume "hostpath-bxbzx"

•
------------------------------
{"msg":"PASSED [sig-storage] PV Protection Verify that PV bound to a PVC is not removed immediately","total":-1,"completed":1,"skipped":19,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 108 lines ...
• [SLOW TEST:13.294 seconds]
[sig-api-machinery] Watchers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":-1,"completed":1,"skipped":1,"failed":0}

S
------------------------------
[BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 19 lines ...
• [SLOW TEST:13.526 seconds]
[sig-auth] Certificates API [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should support building a client with a CSR
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/certificates.go:57
------------------------------
{"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR","total":-1,"completed":1,"skipped":1,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug  7 20:56:14.119: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 24 lines ...
STEP: Building a namespace api object, basename init-container
W0807 20:56:02.037853    7178 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Aug  7 20:56:02.037: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating the pod
Aug  7 20:56:02.338: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Aug  7 20:56:13.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-6884" for this suite.


• [SLOW TEST:13.568 seconds]
[sig-node] InitContainer [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":-1,"completed":1,"skipped":14,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug  7 20:56:14.317: INFO: Only supported for providers [gce gke] (not aws)
... skipping 112 lines ...
• [SLOW TEST:14.219 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a persistent volume claim with a storage class
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/resource_quota.go:530
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim with a storage class","total":-1,"completed":1,"skipped":14,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Aug  7 20:56:03.034: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fcf4b043-5946-41e1-883c-e17be23e363c" in namespace "downward-api-4013" to be "Succeeded or Failed"
Aug  7 20:56:03.188: INFO: Pod "downwardapi-volume-fcf4b043-5946-41e1-883c-e17be23e363c": Phase="Pending", Reason="", readiness=false. Elapsed: 154.626709ms
Aug  7 20:56:05.359: INFO: Pod "downwardapi-volume-fcf4b043-5946-41e1-883c-e17be23e363c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.324828205s
Aug  7 20:56:07.509: INFO: Pod "downwardapi-volume-fcf4b043-5946-41e1-883c-e17be23e363c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.475130752s
Aug  7 20:56:09.659: INFO: Pod "downwardapi-volume-fcf4b043-5946-41e1-883c-e17be23e363c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.625129206s
Aug  7 20:56:11.810: INFO: Pod "downwardapi-volume-fcf4b043-5946-41e1-883c-e17be23e363c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.775786492s
Aug  7 20:56:13.959: INFO: Pod "downwardapi-volume-fcf4b043-5946-41e1-883c-e17be23e363c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.924910828s
STEP: Saw pod success
Aug  7 20:56:13.959: INFO: Pod "downwardapi-volume-fcf4b043-5946-41e1-883c-e17be23e363c" satisfied condition "Succeeded or Failed"
Aug  7 20:56:14.108: INFO: Trying to get logs from node ip-172-20-63-231.ap-northeast-1.compute.internal pod downwardapi-volume-fcf4b043-5946-41e1-883c-e17be23e363c container client-container: <nil>
STEP: delete the pod
Aug  7 20:56:14.418: INFO: Waiting for pod downwardapi-volume-fcf4b043-5946-41e1-883c-e17be23e363c to disappear
Aug  7 20:56:14.566: INFO: Pod downwardapi-volume-fcf4b043-5946-41e1-883c-e17be23e363c no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:14.254 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":17,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug  7 20:56:15.032: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 94 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:37
[It] should support subPath [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:93
STEP: Creating a pod to test hostPath subPath
Aug  7 20:56:02.696: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-7305" to be "Succeeded or Failed"
Aug  7 20:56:02.846: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 150.140208ms
Aug  7 20:56:04.998: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.302668378s
Aug  7 20:56:07.150: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.454333999s
Aug  7 20:56:09.302: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.60589166s
Aug  7 20:56:11.452: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.756798444s
Aug  7 20:56:13.604: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.90851823s
Aug  7 20:56:15.756: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.059921759s
STEP: Saw pod success
Aug  7 20:56:15.756: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
Aug  7 20:56:15.906: INFO: Trying to get logs from node ip-172-20-53-183.ap-northeast-1.compute.internal pod pod-host-path-test container test-container-2: <nil>
STEP: delete the pod
Aug  7 20:56:16.618: INFO: Waiting for pod pod-host-path-test to disappear
Aug  7 20:56:16.769: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:16.473 seconds]
[sig-storage] HostPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support subPath [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:93
------------------------------
{"msg":"PASSED [sig-storage] HostPath should support subPath [NodeConformance]","total":-1,"completed":1,"skipped":13,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug  7 20:56:17.254: INFO: Only supported for providers [gce gke] (not aws)
... skipping 70 lines ...
Aug  7 20:56:01.279: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name projected-secret-test-3ecdc6ca-7a17-458b-b668-36b6b2aaa4c2
STEP: Creating a pod to test consume secrets
Aug  7 20:56:01.890: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a6dbebd5-b3be-4c5e-9f0f-22439f4e53d3" in namespace "projected-1148" to be "Succeeded or Failed"
Aug  7 20:56:02.040: INFO: Pod "pod-projected-secrets-a6dbebd5-b3be-4c5e-9f0f-22439f4e53d3": Phase="Pending", Reason="", readiness=false. Elapsed: 149.848686ms
Aug  7 20:56:04.192: INFO: Pod "pod-projected-secrets-a6dbebd5-b3be-4c5e-9f0f-22439f4e53d3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.301507766s
Aug  7 20:56:06.342: INFO: Pod "pod-projected-secrets-a6dbebd5-b3be-4c5e-9f0f-22439f4e53d3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.451891718s
Aug  7 20:56:08.493: INFO: Pod "pod-projected-secrets-a6dbebd5-b3be-4c5e-9f0f-22439f4e53d3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.603217954s
Aug  7 20:56:10.644: INFO: Pod "pod-projected-secrets-a6dbebd5-b3be-4c5e-9f0f-22439f4e53d3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.753405487s
Aug  7 20:56:12.795: INFO: Pod "pod-projected-secrets-a6dbebd5-b3be-4c5e-9f0f-22439f4e53d3": Phase="Pending", Reason="", readiness=false. Elapsed: 10.904837312s
Aug  7 20:56:14.946: INFO: Pod "pod-projected-secrets-a6dbebd5-b3be-4c5e-9f0f-22439f4e53d3": Phase="Pending", Reason="", readiness=false. Elapsed: 13.055657902s
Aug  7 20:56:17.097: INFO: Pod "pod-projected-secrets-a6dbebd5-b3be-4c5e-9f0f-22439f4e53d3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.207293271s
STEP: Saw pod success
Aug  7 20:56:17.098: INFO: Pod "pod-projected-secrets-a6dbebd5-b3be-4c5e-9f0f-22439f4e53d3" satisfied condition "Succeeded or Failed"
Aug  7 20:56:17.248: INFO: Trying to get logs from node ip-172-20-53-183.ap-northeast-1.compute.internal pod pod-projected-secrets-a6dbebd5-b3be-4c5e-9f0f-22439f4e53d3 container projected-secret-volume-test: <nil>
STEP: delete the pod
Aug  7 20:56:17.558: INFO: Waiting for pod pod-projected-secrets-a6dbebd5-b3be-4c5e-9f0f-22439f4e53d3 to disappear
Aug  7 20:56:17.709: INFO: Pod pod-projected-secrets-a6dbebd5-b3be-4c5e-9f0f-22439f4e53d3 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:17.505 seconds]
[sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":3,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug  7 20:56:18.189: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
... skipping 24 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:217
Aug  7 20:56:13.297: INFO: Waiting up to 5m0s for pod "busybox-readonly-true-de5194ff-4f59-48e6-9706-82f49afc47c5" in namespace "security-context-test-8872" to be "Succeeded or Failed"
Aug  7 20:56:13.445: INFO: Pod "busybox-readonly-true-de5194ff-4f59-48e6-9706-82f49afc47c5": Phase="Pending", Reason="", readiness=false. Elapsed: 147.900746ms
Aug  7 20:56:15.593: INFO: Pod "busybox-readonly-true-de5194ff-4f59-48e6-9706-82f49afc47c5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.296144471s
Aug  7 20:56:17.741: INFO: Pod "busybox-readonly-true-de5194ff-4f59-48e6-9706-82f49afc47c5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.444477336s
Aug  7 20:56:19.894: INFO: Pod "busybox-readonly-true-de5194ff-4f59-48e6-9706-82f49afc47c5": Phase="Failed", Reason="", readiness=false. Elapsed: 6.597358323s
Aug  7 20:56:19.894: INFO: Pod "busybox-readonly-true-de5194ff-4f59-48e6-9706-82f49afc47c5" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Aug  7 20:56:19.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-8872" for this suite.


... skipping 2 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a pod with readOnlyRootFilesystem
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:171
    should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:217
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]","total":-1,"completed":1,"skipped":9,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug  7 20:56:20.250: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 40 lines ...
• [SLOW TEST:6.926 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":6,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug  7 20:56:21.079: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
... skipping 49 lines ...
[It] should support existing single file [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
Aug  7 20:56:15.169: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Aug  7 20:56:15.319: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-8bn4
STEP: Creating a pod to test subpath
Aug  7 20:56:15.472: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-8bn4" in namespace "provisioning-1873" to be "Succeeded or Failed"
Aug  7 20:56:15.622: INFO: Pod "pod-subpath-test-inlinevolume-8bn4": Phase="Pending", Reason="", readiness=false. Elapsed: 150.067655ms
Aug  7 20:56:17.777: INFO: Pod "pod-subpath-test-inlinevolume-8bn4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.304298891s
Aug  7 20:56:19.932: INFO: Pod "pod-subpath-test-inlinevolume-8bn4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.459831186s
Aug  7 20:56:22.082: INFO: Pod "pod-subpath-test-inlinevolume-8bn4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.609870395s
STEP: Saw pod success
Aug  7 20:56:22.082: INFO: Pod "pod-subpath-test-inlinevolume-8bn4" satisfied condition "Succeeded or Failed"
Aug  7 20:56:22.233: INFO: Trying to get logs from node ip-172-20-53-183.ap-northeast-1.compute.internal pod pod-subpath-test-inlinevolume-8bn4 container test-container-subpath-inlinevolume-8bn4: <nil>
STEP: delete the pod
Aug  7 20:56:22.548: INFO: Waiting for pod pod-subpath-test-inlinevolume-8bn4 to disappear
Aug  7 20:56:22.699: INFO: Pod pod-subpath-test-inlinevolume-8bn4 no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-8bn4
Aug  7 20:56:22.699: INFO: Deleting pod "pod-subpath-test-inlinevolume-8bn4" in namespace "provisioning-1873"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":2,"skipped":34,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug  7 20:56:23.309: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 230 lines ...
• [SLOW TEST:22.826 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run the lifecycle of a Deployment [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","total":-1,"completed":1,"skipped":1,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug  7 20:56:23.457: INFO: Only supported for providers [gce gke] (not aws)
... skipping 147 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":2,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 31 lines ...
• [SLOW TEST:9.147 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":-1,"completed":2,"skipped":25,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Aug  7 20:56:27.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-3639" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":-1,"completed":2,"skipped":8,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug  7 20:56:27.869: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 11 lines ...
      Driver supports dynamic provisioning, skipping InlineVolume pattern

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:244
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":1,"skipped":32,"failed":0}
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Aug  7 20:56:24.861: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 32 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Aug  7 20:56:28.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7005" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":-1,"completed":2,"skipped":32,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug  7 20:56:28.524: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 50 lines ...
• [SLOW TEST:13.667 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate configmap [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":-1,"completed":2,"skipped":19,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug  7 20:56:28.685: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 178 lines ...
• [SLOW TEST:30.359 seconds]
[sig-storage] PVC Protection
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Verify that scheduling of a pod that uses PVC that is being deleted fails and the pod becomes Unschedulable
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:145
------------------------------
{"msg":"PASSED [sig-storage] PVC Protection Verify that scheduling of a pod that uses PVC that is being deleted fails and the pod becomes Unschedulable","total":-1,"completed":1,"skipped":0,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug  7 20:56:30.629: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 44 lines ...
• [SLOW TEST:30.605 seconds]
[sig-node] PreStop
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  graceful pod terminated should wait until preStop hook completes the process
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:170
------------------------------
{"msg":"PASSED [sig-node] PreStop graceful pod terminated should wait until preStop hook completes the process","total":-1,"completed":1,"skipped":6,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug  7 20:56:31.269: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 25 lines ...
[It] should support readOnly directory specified in the volumeMount
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:365
Aug  7 20:56:29.278: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Aug  7 20:56:29.278: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-njd5
STEP: Creating a pod to test subpath
Aug  7 20:56:29.433: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-njd5" in namespace "provisioning-5580" to be "Succeeded or Failed"
Aug  7 20:56:29.581: INFO: Pod "pod-subpath-test-inlinevolume-njd5": Phase="Pending", Reason="", readiness=false. Elapsed: 148.016487ms
Aug  7 20:56:31.730: INFO: Pod "pod-subpath-test-inlinevolume-njd5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.296388204s
Aug  7 20:56:33.878: INFO: Pod "pod-subpath-test-inlinevolume-njd5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.44506687s
Aug  7 20:56:36.027: INFO: Pod "pod-subpath-test-inlinevolume-njd5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.593452688s
STEP: Saw pod success
Aug  7 20:56:36.027: INFO: Pod "pod-subpath-test-inlinevolume-njd5" satisfied condition "Succeeded or Failed"
Aug  7 20:56:36.177: INFO: Trying to get logs from node ip-172-20-56-74.ap-northeast-1.compute.internal pod pod-subpath-test-inlinevolume-njd5 container test-container-subpath-inlinevolume-njd5: <nil>
STEP: delete the pod
Aug  7 20:56:36.952: INFO: Waiting for pod pod-subpath-test-inlinevolume-njd5 to disappear
Aug  7 20:56:37.100: INFO: Pod pod-subpath-test-inlinevolume-njd5 no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-njd5
Aug  7 20:56:37.100: INFO: Deleting pod "pod-subpath-test-inlinevolume-njd5" in namespace "provisioning-5580"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:365
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":3,"skipped":34,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug  7 20:56:37.708: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 40 lines ...
• [SLOW TEST:9.919 seconds]
[sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should ensure a single API token exists
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/service_accounts.go:52
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should ensure a single API token exists","total":-1,"completed":3,"skipped":11,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug  7 20:56:37.829: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 72 lines ...
[It] should support file as subpath [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
Aug  7 20:56:01.506: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Aug  7 20:56:01.803: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-8dzr
STEP: Creating a pod to test atomic-volume-subpath
Aug  7 20:56:01.955: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-8dzr" in namespace "provisioning-1142" to be "Succeeded or Failed"
Aug  7 20:56:02.104: INFO: Pod "pod-subpath-test-inlinevolume-8dzr": Phase="Pending", Reason="", readiness=false. Elapsed: 149.117729ms
Aug  7 20:56:04.256: INFO: Pod "pod-subpath-test-inlinevolume-8dzr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.301472572s
Aug  7 20:56:06.406: INFO: Pod "pod-subpath-test-inlinevolume-8dzr": Phase="Pending", Reason="", readiness=false. Elapsed: 4.45164978s
Aug  7 20:56:08.557: INFO: Pod "pod-subpath-test-inlinevolume-8dzr": Phase="Pending", Reason="", readiness=false. Elapsed: 6.601979512s
Aug  7 20:56:10.706: INFO: Pod "pod-subpath-test-inlinevolume-8dzr": Phase="Pending", Reason="", readiness=false. Elapsed: 8.751280647s
Aug  7 20:56:12.855: INFO: Pod "pod-subpath-test-inlinevolume-8dzr": Phase="Pending", Reason="", readiness=false. Elapsed: 10.900410867s
... skipping 6 lines ...
Aug  7 20:56:27.915: INFO: Pod "pod-subpath-test-inlinevolume-8dzr": Phase="Running", Reason="", readiness=true. Elapsed: 25.960278001s
Aug  7 20:56:30.065: INFO: Pod "pod-subpath-test-inlinevolume-8dzr": Phase="Running", Reason="", readiness=true. Elapsed: 28.110501817s
Aug  7 20:56:32.216: INFO: Pod "pod-subpath-test-inlinevolume-8dzr": Phase="Running", Reason="", readiness=true. Elapsed: 30.261163637s
Aug  7 20:56:34.367: INFO: Pod "pod-subpath-test-inlinevolume-8dzr": Phase="Running", Reason="", readiness=true. Elapsed: 32.411799434s
Aug  7 20:56:36.517: INFO: Pod "pod-subpath-test-inlinevolume-8dzr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 34.562251607s
STEP: Saw pod success
Aug  7 20:56:36.517: INFO: Pod "pod-subpath-test-inlinevolume-8dzr" satisfied condition "Succeeded or Failed"
Aug  7 20:56:36.667: INFO: Trying to get logs from node ip-172-20-53-183.ap-northeast-1.compute.internal pod pod-subpath-test-inlinevolume-8dzr container test-container-subpath-inlinevolume-8dzr: <nil>
STEP: delete the pod
Aug  7 20:56:36.973: INFO: Waiting for pod pod-subpath-test-inlinevolume-8dzr to disappear
Aug  7 20:56:37.121: INFO: Pod pod-subpath-test-inlinevolume-8dzr no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-8dzr
Aug  7 20:56:37.121: INFO: Deleting pod "pod-subpath-test-inlinevolume-8dzr" in namespace "provisioning-1142"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":1,"skipped":2,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug  7 20:56:37.907: INFO: Only supported for providers [openstack] (not aws)
... skipping 67 lines ...
Aug  7 20:56:30.653: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Aug  7 20:56:31.553: INFO: Waiting up to 5m0s for pod "security-context-22782240-0ae1-46a7-9271-470d71f5c387" in namespace "security-context-4794" to be "Succeeded or Failed"
Aug  7 20:56:31.702: INFO: Pod "security-context-22782240-0ae1-46a7-9271-470d71f5c387": Phase="Pending", Reason="", readiness=false. Elapsed: 148.898124ms
Aug  7 20:56:33.855: INFO: Pod "security-context-22782240-0ae1-46a7-9271-470d71f5c387": Phase="Pending", Reason="", readiness=false. Elapsed: 2.301507048s
Aug  7 20:56:36.004: INFO: Pod "security-context-22782240-0ae1-46a7-9271-470d71f5c387": Phase="Pending", Reason="", readiness=false. Elapsed: 4.450935663s
Aug  7 20:56:38.154: INFO: Pod "security-context-22782240-0ae1-46a7-9271-470d71f5c387": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.601200822s
STEP: Saw pod success
Aug  7 20:56:38.154: INFO: Pod "security-context-22782240-0ae1-46a7-9271-470d71f5c387" satisfied condition "Succeeded or Failed"
Aug  7 20:56:38.303: INFO: Trying to get logs from node ip-172-20-56-74.ap-northeast-1.compute.internal pod security-context-22782240-0ae1-46a7-9271-470d71f5c387 container test-container: <nil>
STEP: delete the pod
Aug  7 20:56:38.613: INFO: Waiting for pod security-context-22782240-0ae1-46a7-9271-470d71f5c387 to disappear
Aug  7 20:56:38.762: INFO: Pod security-context-22782240-0ae1-46a7-9271-470d71f5c387 no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:8.409 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":2,"skipped":12,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug  7 20:56:39.093: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 25 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Aug  7 20:56:29.687: INFO: Waiting up to 5m0s for pod "downwardapi-volume-817393be-9ec1-4d78-bca9-7cc42d7e93b4" in namespace "projected-7797" to be "Succeeded or Failed"
Aug  7 20:56:29.837: INFO: Pod "downwardapi-volume-817393be-9ec1-4d78-bca9-7cc42d7e93b4": Phase="Pending", Reason="", readiness=false. Elapsed: 149.403844ms
Aug  7 20:56:31.987: INFO: Pod "downwardapi-volume-817393be-9ec1-4d78-bca9-7cc42d7e93b4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.299108528s
Aug  7 20:56:34.137: INFO: Pod "downwardapi-volume-817393be-9ec1-4d78-bca9-7cc42d7e93b4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.449630318s
Aug  7 20:56:36.287: INFO: Pod "downwardapi-volume-817393be-9ec1-4d78-bca9-7cc42d7e93b4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.599892364s
Aug  7 20:56:38.438: INFO: Pod "downwardapi-volume-817393be-9ec1-4d78-bca9-7cc42d7e93b4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.750354149s
STEP: Saw pod success
Aug  7 20:56:38.438: INFO: Pod "downwardapi-volume-817393be-9ec1-4d78-bca9-7cc42d7e93b4" satisfied condition "Succeeded or Failed"
Aug  7 20:56:38.587: INFO: Trying to get logs from node ip-172-20-53-183.ap-northeast-1.compute.internal pod downwardapi-volume-817393be-9ec1-4d78-bca9-7cc42d7e93b4 container client-container: <nil>
STEP: delete the pod
Aug  7 20:56:38.896: INFO: Waiting for pod downwardapi-volume-817393be-9ec1-4d78-bca9-7cc42d7e93b4 to disappear
Aug  7 20:56:39.046: INFO: Pod downwardapi-volume-817393be-9ec1-4d78-bca9-7cc42d7e93b4 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:10.556 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":48,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug  7 20:56:39.375: INFO: Only supported for providers [gce gke] (not aws)
... skipping 71 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:379
    should return command exit codes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:499
      running a failing command
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:517
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should return command exit codes running a failing command","total":-1,"completed":2,"skipped":8,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug  7 20:56:40.876: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 118 lines ...
STEP: Destroying namespace "apply-8353" for this suite.
[AfterEach] [sig-api-machinery] ServerSideApply
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/apply.go:56

•
------------------------------
{"msg":"PASSED [sig-api-machinery] ServerSideApply should ignore conflict errors if force apply is used","total":-1,"completed":3,"skipped":21,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
... skipping 18 lines ...
Aug  7 20:56:25.708: INFO: PersistentVolumeClaim pvc-2qwlh found but phase is Pending instead of Bound.
Aug  7 20:56:27.862: INFO: PersistentVolumeClaim pvc-2qwlh found and phase=Bound (8.757533119s)
Aug  7 20:56:27.862: INFO: Waiting up to 3m0s for PersistentVolume local-g2l8s to have phase Bound
Aug  7 20:56:28.012: INFO: PersistentVolume local-g2l8s found and phase=Bound (149.791689ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-v4kl
STEP: Creating a pod to test exec-volume-test
Aug  7 20:56:28.466: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-v4kl" in namespace "volume-475" to be "Succeeded or Failed"
Aug  7 20:56:28.617: INFO: Pod "exec-volume-test-preprovisionedpv-v4kl": Phase="Pending", Reason="", readiness=false. Elapsed: 150.475404ms
Aug  7 20:56:30.770: INFO: Pod "exec-volume-test-preprovisionedpv-v4kl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.303624948s
Aug  7 20:56:32.924: INFO: Pod "exec-volume-test-preprovisionedpv-v4kl": Phase="Pending", Reason="", readiness=false. Elapsed: 4.457814435s
Aug  7 20:56:35.075: INFO: Pod "exec-volume-test-preprovisionedpv-v4kl": Phase="Pending", Reason="", readiness=false. Elapsed: 6.608488351s
Aug  7 20:56:37.226: INFO: Pod "exec-volume-test-preprovisionedpv-v4kl": Phase="Pending", Reason="", readiness=false. Elapsed: 8.759807283s
Aug  7 20:56:39.378: INFO: Pod "exec-volume-test-preprovisionedpv-v4kl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.911411586s
STEP: Saw pod success
Aug  7 20:56:39.378: INFO: Pod "exec-volume-test-preprovisionedpv-v4kl" satisfied condition "Succeeded or Failed"
Aug  7 20:56:39.528: INFO: Trying to get logs from node ip-172-20-53-183.ap-northeast-1.compute.internal pod exec-volume-test-preprovisionedpv-v4kl container exec-container-preprovisionedpv-v4kl: <nil>
STEP: delete the pod
Aug  7 20:56:39.836: INFO: Waiting for pod exec-volume-test-preprovisionedpv-v4kl to disappear
Aug  7 20:56:39.986: INFO: Pod exec-volume-test-preprovisionedpv-v4kl no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-v4kl
Aug  7 20:56:39.986: INFO: Deleting pod "exec-volume-test-preprovisionedpv-v4kl" in namespace "volume-475"
... skipping 17 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":2,"skipped":21,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug  7 20:56:41.940: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 28 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-link-bindmounted]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 11 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Aug  7 20:56:44.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7726" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be immutable if `immutable` field is set [Conformance]","total":-1,"completed":3,"skipped":31,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug  7 20:56:44.469: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 46 lines ...
Aug  7 20:56:26.445: INFO: PersistentVolumeClaim pvc-gccgq found but phase is Pending instead of Bound.
Aug  7 20:56:28.595: INFO: PersistentVolumeClaim pvc-gccgq found and phase=Bound (15.201516735s)
Aug  7 20:56:28.595: INFO: Waiting up to 3m0s for PersistentVolume local-j5vjg to have phase Bound
Aug  7 20:56:28.744: INFO: PersistentVolume local-j5vjg found and phase=Bound (149.079874ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-txth
STEP: Creating a pod to test subpath
Aug  7 20:56:29.193: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-txth" in namespace "provisioning-3351" to be "Succeeded or Failed"
Aug  7 20:56:29.342: INFO: Pod "pod-subpath-test-preprovisionedpv-txth": Phase="Pending", Reason="", readiness=false. Elapsed: 148.995023ms
Aug  7 20:56:31.492: INFO: Pod "pod-subpath-test-preprovisionedpv-txth": Phase="Pending", Reason="", readiness=false. Elapsed: 2.299565251s
Aug  7 20:56:33.643: INFO: Pod "pod-subpath-test-preprovisionedpv-txth": Phase="Pending", Reason="", readiness=false. Elapsed: 4.449707238s
Aug  7 20:56:35.793: INFO: Pod "pod-subpath-test-preprovisionedpv-txth": Phase="Pending", Reason="", readiness=false. Elapsed: 6.6004191s
Aug  7 20:56:37.943: INFO: Pod "pod-subpath-test-preprovisionedpv-txth": Phase="Pending", Reason="", readiness=false. Elapsed: 8.749994765s
Aug  7 20:56:40.093: INFO: Pod "pod-subpath-test-preprovisionedpv-txth": Phase="Pending", Reason="", readiness=false. Elapsed: 10.900610516s
Aug  7 20:56:42.245: INFO: Pod "pod-subpath-test-preprovisionedpv-txth": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.052182451s
STEP: Saw pod success
Aug  7 20:56:42.245: INFO: Pod "pod-subpath-test-preprovisionedpv-txth" satisfied condition "Succeeded or Failed"
Aug  7 20:56:42.397: INFO: Trying to get logs from node ip-172-20-63-231.ap-northeast-1.compute.internal pod pod-subpath-test-preprovisionedpv-txth container test-container-volume-preprovisionedpv-txth: <nil>
STEP: delete the pod
Aug  7 20:56:42.745: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-txth to disappear
Aug  7 20:56:42.896: INFO: Pod pod-subpath-test-preprovisionedpv-txth no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-txth
Aug  7 20:56:42.896: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-txth" in namespace "provisioning-3351"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":1,"skipped":20,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 38 lines ...
• [SLOW TEST:18.710 seconds]
[sig-apps] DisruptionController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should block an eviction until the PDB is updated to allow it [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should block an eviction until the PDB is updated to allow it [Conformance]","total":-1,"completed":3,"skipped":26,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug  7 20:56:45.200: INFO: Only supported for providers [openstack] (not aws)
... skipping 85 lines ...
• [SLOW TEST:9.627 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":-1,"completed":4,"skipped":35,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Aug  7 20:56:47.373: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 29 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Aug  7 20:56:48.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "request-timeout-669" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Server request timeout should return HTTP status code 400 if the user specifies an invalid timeout in the request URL","total":-1,"completed":5,"skipped":40,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Aug  7 20:56:42.287: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8f6c232c-4ea4-4227-9398-de1caba3e007" in namespace "downward-api-4531" to be "Succeeded or Failed"
Aug  7 20:56:42.440: INFO: Pod "downwardapi-volume-8f6c232c-4ea4-4227-9398-de1caba3e007": Phase="Pending", Reason="", readiness=false. Elapsed: 152.353601ms
Aug  7 20:56:44.589: INFO: Pod "downwardapi-volume-8f6c232c-4ea4-4227-9398-de1caba3e007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.301613112s
Aug  7 20:56:46.739: INFO: Pod "downwardapi-volume-8f6c232c-4ea4-4227-9398-de1caba3e007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.451925987s
Aug  7 20:56:48.889: INFO: Pod "downwardapi-volume-8f6c232c-4ea4-4227-9398-de1caba3e007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.601111643s
Aug  7 20:56:51.038: INFO: Pod "downwardapi-volume-8f6c232c-4ea4-4227-9398-de1caba3e007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.750862232s
STEP: Saw pod success
Aug  7 20:56:51.038: INFO: Pod "downwardapi-volume-8f6c232c-4ea4-4227-9398-de1caba3e007" satisfied condition "Succeeded or Failed"
Aug  7 20:56:51.187: INFO: Trying to get logs from node ip-172-20-53-183.ap-northeast-1.compute.internal pod downwardapi-volume-8f6c232c-4ea4-4227-9398-de1caba3e007 container client-container: <nil>
STEP: delete the pod
Aug  7 20:56:51.499: INFO: Waiting for pod downwardapi-volume-8f6c232c-4ea4-4227-9398-de1caba3e007 to disappear
Aug  7 20:56:51.648: INFO: Pod downwardapi-volume-8f6c232c-4ea4-4227-9398-de1caba3e007 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:10.562 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide podname only [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":23,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 27 lines ...
Aug  7 20:56:25.301: INFO: PersistentVolumeClaim pvc-tdxdk found but phase is Pending instead of Bound.
Aug  7 20:56:27.454: INFO: PersistentVolumeClaim pvc-tdxdk found and phase=Bound (13.048474153s)
Aug  7 20:56:27.454: INFO: Waiting up to 3m0s for PersistentVolume local-v6pht to have phase Bound
Aug  7 20:56:27.603: INFO: PersistentVolume local-v6pht found and phase=Bound (148.878703ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-5d57
STEP: Creating a pod to test subpath
Aug  7 20:56:28.060: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-5d57" in namespace "provisioning-9006" to be "Succeeded or Failed"
Aug  7 20:56:28.209: INFO: Pod "pod-subpath-test-preprovisionedpv-5d57": Phase="Pending", Reason="", readiness=false. Elapsed: 149.296638ms
Aug  7 20:56:30.360: INFO: Pod "pod-subpath-test-preprovisionedpv-5d57": Phase="Pending", Reason="", readiness=false. Elapsed: 2.300267905s
Aug  7 20:56:32.510: INFO: Pod "pod-subpath-test-preprovisionedpv-5d57": Phase="Pending", Reason="", readiness=false. Elapsed: 4.450486278s
Aug  7 20:56:34.660: INFO: Pod "pod-subpath-test-preprovisionedpv-5d57": Phase="Pending", Reason="", readiness=false. Elapsed: 6.600456733s
Aug  7 20:56:36.809: INFO: Pod "pod-subpath-test-preprovisionedpv-5d57": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.74976462s
STEP: Saw pod success
Aug  7 20:56:36.809: INFO: Pod "pod-subpath-test-preprovisionedpv-5d57" satisfied condition "Succeeded or Failed"
Aug  7 20:56:36.958: INFO: Trying to get logs from node ip-172-20-63-231.ap-northeast-1.compute.internal pod pod-subpath-test-preprovisionedpv-5d57 container test-container-subpath-preprovisionedpv-5d57: <nil>
STEP: delete the pod
Aug  7 20:56:37.277: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-5d57 to disappear
Aug  7 20:56:37.426: INFO: Pod pod-subpath-test-preprovisionedpv-5d57 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-5d57
Aug  7 20:56:37.426: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-5d57" in namespace "provisioning-9006"
STEP: Creating pod pod-subpath-test-preprovisionedpv-5d57
STEP: Creating a pod to test subpath
Aug  7 20:56:37.730: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-5d57" in namespace "provisioning-9006" to be "Succeeded or Failed"
Aug  7 20:56:37.880: INFO: Pod "pod-subpath-test-preprovisionedpv-5d57": Phase="Pending", Reason="", readiness=false. Elapsed: 149.336307ms
Aug  7 20:56:40.034: INFO: Pod "pod-subpath-test-preprovisionedpv-5d57": Phase="Pending", Reason="", readiness=false. Elapsed: 2.303217506s
Aug  7 20:56:42.184: INFO: Pod "pod-subpath-test-preprovisionedpv-5d57": Phase="Pending", Reason="", readiness=false. Elapsed: 4.453173985s
Aug  7 20:56:44.335: INFO: Pod "pod-subpath-test-preprovisionedpv-5d57": Phase="Pending", Reason="", readiness=false. Elapsed: 6.604506177s
Aug  7 20:56:46.484: INFO: Pod "pod-subpath-test-preprovisionedpv-5d57": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.753636802s
STEP: Saw pod success
Aug  7 20:56:46.484: INFO: Pod "pod-subpath-test-preprovisionedpv-5d57" satisfied condition "Succeeded or Failed"
Aug  7 20:56:46.633: INFO: Trying to get logs from node ip-172-20-63-231.ap-northeast-1.compute.internal pod pod-subpath-test-preprovisionedpv-5d57 container test-container-subpath-preprovisionedpv-5d57: <nil>
STEP: delete the pod
Aug  7 20:56:46.956: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-5d57 to disappear
Aug  7 20:56:47.106: INFO: Pod pod-subpath-test-preprovisionedpv-5d57 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-5d57
Aug  7 20:56:47.106: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-5d57" in namespace "provisioning-9006"
... skipping 97 lines ...
Aug  7 20:56:13.353: INFO: Creating resource for dynamic PV
Aug  7 20:56:13.353: INFO: Using claimSize:1Gi, test suite supported size:{ 1Gi}, driver(aws) supported size:{ 1Gi} 
STEP: creating a StorageClass volume-expand-9387gnn5p
STEP: creating a claim
STEP: Expanding non-expandable pvc
Aug  7 20:56:13.803: INFO: currentPvcSize {{1073741824 0} {<nil>} 1Gi BinarySI}, newSize {{2147483648 0} {<nil>}  BinarySI}
Aug  7 20:56:14.105: INFO: Error updating pvc awsfd8x7: PersistentVolumeClaim "awsfd8x7" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-9387gnn5p",
  	... // 3 identical fields
  }

... (the same "Error updating pvc awsfd8x7: PersistentVolumeClaim "awsfd8x7" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims" message and identical spec diff repeated on each retry, roughly every 2s, from 20:56:16.405 through 20:56:30.404; the final entry was truncated in the original log) ...