PR: bwagner5: Dependencies for Toolbox Instance Selector
Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2020-07-02 14:24
Elapsed: 26m18s
Revision: 8e761f8da03e649e1ccab25b4ca95aa133b0b59f
Refs: 9477
Resultstore: https://source.cloud.google.com/results/invocations/96ce344e-bd7b-4ef3-b815-961bf3118c0c/targets/test

No Test Failures!


Error lines from build-log.txt

... skipping 545 lines ...
2020/07/02 14:41:05 process.go:155: Step '/workspace/get-kube.sh' finished in 1m21.143534853s
2020/07/02 14:41:05 process.go:153: Running: /workspace/kops get clusters e2e-e3278a4a23-ff1eb.test-cncf-aws.k8s.io

cluster not found "e2e-e3278a4a23-ff1eb.test-cncf-aws.k8s.io"
2020/07/02 14:41:06 process.go:155: Step '/workspace/kops get clusters e2e-e3278a4a23-ff1eb.test-cncf-aws.k8s.io' finished in 1.120316814s
2020/07/02 14:41:06 util.go:42: curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2020/07/02 14:41:06 kops.go:505: failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
2020/07/02 14:41:06 util.go:68: curl https://ip.jsb.workers.dev
2020/07/02 14:41:06 kops.go:430: Using external IP for admin access: 35.184.110.2/32
2020/07/02 14:41:06 process.go:153: Running: /workspace/kops create cluster --name e2e-e3278a4a23-ff1eb.test-cncf-aws.k8s.io --ssh-public-key /workspace/.ssh/kube_aws_rsa.pub --node-count 4 --node-volume-size 48 --master-volume-size 48 --master-count 1 --zones eu-west-1a --master-size c5.large --kubernetes-version https://storage.googleapis.com/kubernetes-release/release/v1.19.0-beta.2 --admin-access 35.184.110.2/32 --cloud aws --override cluster.spec.nodePortAccess=0.0.0.0/0 --yes
I0702 14:41:06.905209    8397 featureflag.go:156] FeatureFlag "SpecOverrideFlag"=true
I0702 14:41:07.112580    8397 create_cluster.go:1438] Using SSH public key: /workspace/.ssh/kube_aws_rsa.pub
W0702 14:41:08.068446    8397 channel.go:298] unable to parse kops version "pull-258dc577f"
... skipping 23 lines ...
I0702 14:41:16.283986    8397 keypair.go:223] Issuing new certificate: "apiserver-aggregator"
I0702 14:41:16.296624    8397 keypair.go:223] Issuing new certificate: "kubelet"
I0702 14:41:16.342708    8397 keypair.go:223] Issuing new certificate: "kube-proxy"
I0702 14:41:16.367593    8397 keypair.go:223] Issuing new certificate: "kubecfg"
I0702 14:41:17.694740    8397 executor.go:103] Tasks: 66 done / 86 total; 18 can run
I0702 14:41:19.529263    8397 executor.go:103] Tasks: 84 done / 86 total; 2 can run
W0702 14:41:21.045225    8397 executor.go:128] error running task "AutoscalingGroup/master-eu-west-1a.masters.e2e-e3278a4a23-ff1eb.test-cncf-aws.k8s.io" (9m58s remaining to succeed): error creating AutoscalingGroup: ValidationError: You must use a valid fully-formed launch template. Value (masters.e2e-e3278a4a23-ff1eb.test-cncf-aws.k8s.io) for parameter iamInstanceProfile.name is invalid. Invalid IAM Instance Profile name
	status code: 400, request id: f5b1ea60-12fc-4dac-a538-a64dab89a32e
W0702 14:41:21.045274    8397 executor.go:128] error running task "AutoscalingGroup/nodes.e2e-e3278a4a23-ff1eb.test-cncf-aws.k8s.io" (9m58s remaining to succeed): error creating AutoscalingGroup: ValidationError: You must use a valid fully-formed launch template. Value (nodes.e2e-e3278a4a23-ff1eb.test-cncf-aws.k8s.io) for parameter iamInstanceProfile.name is invalid. Invalid IAM Instance Profile name
	status code: 400, request id: b84a3fbc-e62a-4415-a0d4-0d4eb868f703
I0702 14:41:21.045296    8397 executor.go:143] No progress made, sleeping before retrying 2 failed task(s)
I0702 14:41:31.045484    8397 executor.go:103] Tasks: 84 done / 86 total; 2 can run
I0702 14:41:33.200108    8397 executor.go:103] Tasks: 86 done / 86 total; 0 can run
I0702 14:41:33.200152    8397 dns.go:155] Pre-creating DNS records
I0702 14:41:34.542962    8397 update_cluster.go:280] Exporting kubecfg for cluster
kops has set your kubectl context to e2e-e3278a4a23-ff1eb.test-cncf-aws.k8s.io

... skipping 8 lines ...

2020/07/02 14:41:35 process.go:155: Step '/workspace/kops create cluster --name e2e-e3278a4a23-ff1eb.test-cncf-aws.k8s.io --ssh-public-key /workspace/.ssh/kube_aws_rsa.pub --node-count 4 --node-volume-size 48 --master-volume-size 48 --master-count 1 --zones eu-west-1a --master-size c5.large --kubernetes-version https://storage.googleapis.com/kubernetes-release/release/v1.19.0-beta.2 --admin-access 35.184.110.2/32 --cloud aws --override cluster.spec.nodePortAccess=0.0.0.0/0 --yes' finished in 28.35342503s
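The two AutoscalingGroup errors above cleared on the executor's first retry, which fits IAM instance-profile eventual consistency: kops had only just created the profiles, and EC2 Auto Scaling could not see them yet. A minimal sketch of checking this by hand with the AWS CLI (the profile name is copied from the ValidationError above; aws iam wait / get-instance-profile are standard subcommands):

  # Block until the instance profile referenced by the launch template is
  # visible to IAM, then print its ARN and attached roles.
  PROFILE="masters.e2e-e3278a4a23-ff1eb.test-cncf-aws.k8s.io"
  aws iam wait instance-profile-exists --instance-profile-name "$PROFILE"
  aws iam get-instance-profile --instance-profile-name "$PROFILE" \
    --query 'InstanceProfile.{Arn:Arn,Roles:Roles[].RoleName}'
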
2020/07/02 14:41:35 process.go:153: Running: /workspace/kops validate cluster e2e-e3278a4a23-ff1eb.test-cncf-aws.k8s.io --wait 15m
I0702 14:41:35.296866    8424 featureflag.go:156] FeatureFlag "SpecOverrideFlag"=true
Validating cluster e2e-e3278a4a23-ff1eb.test-cncf-aws.k8s.io

W0702 14:41:37.081337    8424 validate_cluster.go:173] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-e3278a4a23-ff1eb.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-1a	Master	c5.large	1	1	eu-west-1a
nodes			Node	t3.medium	4	4	eu-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
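
The validator's message above already names the likely cause: api.e2e-e3278a4a23-ff1eb.test-cncf-aws.k8s.io still resolves to the kops placeholder 203.0.113.123 until dns-controller on the new master rewrites the record. A minimal sketch of the checks that message suggests (standard dig and kubectl invocations; in a kops cluster dns-controller normally runs as a Deployment in kube-system):

  # Does the API name still point at the placeholder address?
  dig +short api.e2e-e3278a4a23-ff1eb.test-cncf-aws.k8s.io

  # Once the API is reachable, inspect dns-controller for record-update errors.
  kubectl -n kube-system logs deployment/dns-controller --tail=50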
... skipping 498 lines: the same dns/apiserver validation failure repeated every ~10s from 14:41:47 to 14:47:09 ...
W0702 14:47:19.959567    8424 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-1a	Master	c5.large	1	1	eu-west-1a
nodes			Node	t3.medium	4	4	eu-west-1a

... skipping 6 lines ...
Machine	i-01e03df0ffe577910			machine "i-01e03df0ffe577910" has not yet joined cluster
Machine	i-03afc1d88dc07f999			machine "i-03afc1d88dc07f999" has not yet joined cluster
Machine	i-05347f6bd46d0cedf			machine "i-05347f6bd46d0cedf" has not yet joined cluster
Machine	i-09a3e5b70a1c3c40a			machine "i-09a3e5b70a1c3c40a" has not yet joined cluster
Pod	kube-system/kube-dns-677d5df9b4-fr2pw	system-cluster-critical pod "kube-dns-677d5df9b4-fr2pw" is pending

Validation Failed
W0702 14:47:32.853688    8424 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-1a	Master	c5.large	1	1	eu-west-1a
nodes			Node	t3.medium	4	4	eu-west-1a

... skipping 9 lines ...
Machine	i-05347f6bd46d0cedf				machine "i-05347f6bd46d0cedf" has not yet joined cluster
Node	ip-172-20-48-53.eu-west-1.compute.internal	node "ip-172-20-48-53.eu-west-1.compute.internal" is not ready
Node	ip-172-20-61-124.eu-west-1.compute.internal	node "ip-172-20-61-124.eu-west-1.compute.internal" is not ready
Pod	kube-system/kube-dns-677d5df9b4-fr2pw		system-cluster-critical pod "kube-dns-677d5df9b4-fr2pw" is pending
Pod	kube-system/kube-dns-677d5df9b4-ndxcc		system-cluster-critical pod "kube-dns-677d5df9b4-ndxcc" is pending

Validation Failed
W0702 14:47:44.448479    8424 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-1a	Master	c5.large	1	1	eu-west-1a
nodes			Node	t3.medium	4	4	eu-west-1a

... skipping 10 lines ...
Node	ip-172-20-38-177.eu-west-1.compute.internal	node "ip-172-20-38-177.eu-west-1.compute.internal" is not ready
Node	ip-172-20-48-53.eu-west-1.compute.internal	node "ip-172-20-48-53.eu-west-1.compute.internal" is not ready
Node	ip-172-20-61-124.eu-west-1.compute.internal	node "ip-172-20-61-124.eu-west-1.compute.internal" is not ready
Pod	kube-system/kube-dns-677d5df9b4-fr2pw		system-cluster-critical pod "kube-dns-677d5df9b4-fr2pw" is pending
Pod	kube-system/kube-dns-677d5df9b4-ndxcc		system-cluster-critical pod "kube-dns-677d5df9b4-ndxcc" is pending

Validation Failed
W0702 14:47:55.935110    8424 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-1a	Master	c5.large	1	1	eu-west-1a
nodes			Node	t3.medium	4	4	eu-west-1a

... skipping 8 lines ...
VALIDATION ERRORS
KIND	NAME						MESSAGE
Node	ip-172-20-38-177.eu-west-1.compute.internal	node "ip-172-20-38-177.eu-west-1.compute.internal" is not ready
Pod	kube-system/kube-dns-677d5df9b4-fr2pw		system-cluster-critical pod "kube-dns-677d5df9b4-fr2pw" is pending
Pod	kube-system/kube-dns-677d5df9b4-ndxcc		system-cluster-critical pod "kube-dns-677d5df9b4-ndxcc" is pending

Validation Failed
W0702 14:48:07.567486    8424 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-1a	Master	c5.large	1	1	eu-west-1a
nodes			Node	t3.medium	4	4	eu-west-1a

... skipping 7 lines ...

VALIDATION ERRORS
KIND	NAME					MESSAGE
Pod	kube-system/kube-dns-677d5df9b4-fr2pw	system-cluster-critical pod "kube-dns-677d5df9b4-fr2pw" is not ready (kubedns)
Pod	kube-system/kube-dns-677d5df9b4-ndxcc	system-cluster-critical pod "kube-dns-677d5df9b4-ndxcc" is pending

Validation Failed
W0702 14:48:19.275328    8424 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-1a	Master	c5.large	1	1	eu-west-1a
nodes			Node	t3.medium	4	4	eu-west-1a

... skipping 6 lines ...
ip-172-20-61-124.eu-west-1.compute.internal	node	True

VALIDATION ERRORS
KIND	NAME									MESSAGE
Pod	kube-system/kube-proxy-ip-172-20-34-251.eu-west-1.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-34-251.eu-west-1.compute.internal" is pending

Validation Failed
W0702 14:48:30.955801    8424 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-1a	Master	c5.large	1	1	eu-west-1a
nodes			Node	t3.medium	4	4	eu-west-1a

... skipping 650 lines ...
[sig-storage] In-tree Volumes
/workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: tmpfs]
  /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:56
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:115
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:169
------------------------------
... skipping 95 lines ...
STEP: Destroying namespace "services-811" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:812

•
------------------------------
{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":-1,"completed":1,"skipped":0,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:116
Jul  2 14:49:40.353: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 132 lines ...
  /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jul  2 14:49:40.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-414" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should surface a failure condition on a common issue like exceeded quota","total":-1,"completed":1,"skipped":5,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:116
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
... skipping 46 lines ...
  /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jul  2 14:49:41.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3506" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl create quota should create a quota without scopes","total":-1,"completed":1,"skipped":1,"failed":0}

SSSSS
------------------------------
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul  2 14:49:39.196: INFO: >>> kubeConfig: /tmp/kops363865862/kubeconfig
STEP: Building a namespace api object, basename containers
Jul  2 14:49:39.639: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test override command
Jul  2 14:49:39.971: INFO: Waiting up to 5m0s for pod "client-containers-f0b898e7-8c4f-4f73-ace6-ae3224aaa74e" in namespace "containers-8802" to be "Succeeded or Failed"
Jul  2 14:49:40.080: INFO: Pod "client-containers-f0b898e7-8c4f-4f73-ace6-ae3224aaa74e": Phase="Pending", Reason="", readiness=false. Elapsed: 108.9819ms
Jul  2 14:49:42.193: INFO: Pod "client-containers-f0b898e7-8c4f-4f73-ace6-ae3224aaa74e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.22115564s
Jul  2 14:49:44.427: INFO: Pod "client-containers-f0b898e7-8c4f-4f73-ace6-ae3224aaa74e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.455461998s
Jul  2 14:49:46.544: INFO: Pod "client-containers-f0b898e7-8c4f-4f73-ace6-ae3224aaa74e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.572692761s
STEP: Saw pod success
Jul  2 14:49:46.544: INFO: Pod "client-containers-f0b898e7-8c4f-4f73-ace6-ae3224aaa74e" satisfied condition "Succeeded or Failed"
Jul  2 14:49:46.653: INFO: Trying to get logs from node ip-172-20-34-251.eu-west-1.compute.internal pod client-containers-f0b898e7-8c4f-4f73-ace6-ae3224aaa74e container test-container: <nil>
STEP: delete the pod
Jul  2 14:49:46.896: INFO: Waiting for pod client-containers-f0b898e7-8c4f-4f73-ace6-ae3224aaa74e to disappear
Jul  2 14:49:47.005: INFO: Pod client-containers-f0b898e7-8c4f-4f73-ace6-ae3224aaa74e no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:8.149 seconds]
[k8s.io] Docker Containers
/workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":2,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 3 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:107
STEP: Creating a pod to test downward API volume plugin
Jul  2 14:49:41.383: INFO: Waiting up to 5m0s for pod "metadata-volume-f36a2c28-e323-46ff-975e-e88c88a4e767" in namespace "downward-api-3271" to be "Succeeded or Failed"
Jul  2 14:49:41.501: INFO: Pod "metadata-volume-f36a2c28-e323-46ff-975e-e88c88a4e767": Phase="Pending", Reason="", readiness=false. Elapsed: 117.819724ms
Jul  2 14:49:43.616: INFO: Pod "metadata-volume-f36a2c28-e323-46ff-975e-e88c88a4e767": Phase="Pending", Reason="", readiness=false. Elapsed: 2.232579453s
Jul  2 14:49:45.777: INFO: Pod "metadata-volume-f36a2c28-e323-46ff-975e-e88c88a4e767": Phase="Pending", Reason="", readiness=false. Elapsed: 4.39336925s
Jul  2 14:49:47.896: INFO: Pod "metadata-volume-f36a2c28-e323-46ff-975e-e88c88a4e767": Phase="Pending", Reason="", readiness=false. Elapsed: 6.512248638s
Jul  2 14:49:50.034: INFO: Pod "metadata-volume-f36a2c28-e323-46ff-975e-e88c88a4e767": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.650133895s
STEP: Saw pod success
Jul  2 14:49:50.034: INFO: Pod "metadata-volume-f36a2c28-e323-46ff-975e-e88c88a4e767" satisfied condition "Succeeded or Failed"
Jul  2 14:49:50.140: INFO: Trying to get logs from node ip-172-20-61-124.eu-west-1.compute.internal pod metadata-volume-f36a2c28-e323-46ff-975e-e88c88a4e767 container client-container: <nil>
STEP: delete the pod
Jul  2 14:49:50.366: INFO: Waiting for pod metadata-volume-f36a2c28-e323-46ff-975e-e88c88a4e767 to disappear
Jul  2 14:49:50.475: INFO: Pod metadata-volume-f36a2c28-e323-46ff-975e-e88c88a4e767 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:11.462 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:107
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":1,"skipped":4,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:116
Jul  2 14:49:50.811: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 46 lines ...
Jul  2 14:49:41.691: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating secret secrets-2129/secret-test-9faade5d-74f5-4bbe-9d8e-045a8c640c31
STEP: Creating a pod to test consume secrets
Jul  2 14:49:42.130: INFO: Waiting up to 5m0s for pod "pod-configmaps-50c58c10-6a1d-464b-bb4e-d21492075bbc" in namespace "secrets-2129" to be "Succeeded or Failed"
Jul  2 14:49:42.239: INFO: Pod "pod-configmaps-50c58c10-6a1d-464b-bb4e-d21492075bbc": Phase="Pending", Reason="", readiness=false. Elapsed: 109.702014ms
Jul  2 14:49:44.351: INFO: Pod "pod-configmaps-50c58c10-6a1d-464b-bb4e-d21492075bbc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.221333926s
Jul  2 14:49:46.467: INFO: Pod "pod-configmaps-50c58c10-6a1d-464b-bb4e-d21492075bbc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.33760136s
Jul  2 14:49:48.584: INFO: Pod "pod-configmaps-50c58c10-6a1d-464b-bb4e-d21492075bbc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.453912204s
Jul  2 14:49:50.691: INFO: Pod "pod-configmaps-50c58c10-6a1d-464b-bb4e-d21492075bbc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.561429875s
STEP: Saw pod success
Jul  2 14:49:50.691: INFO: Pod "pod-configmaps-50c58c10-6a1d-464b-bb4e-d21492075bbc" satisfied condition "Succeeded or Failed"
Jul  2 14:49:50.803: INFO: Trying to get logs from node ip-172-20-48-53.eu-west-1.compute.internal pod pod-configmaps-50c58c10-6a1d-464b-bb4e-d21492075bbc container env-test: <nil>
STEP: delete the pod
Jul  2 14:49:51.043: INFO: Waiting for pod pod-configmaps-50c58c10-6a1d-464b-bb4e-d21492075bbc to disappear
Jul  2 14:49:51.151: INFO: Pod pod-configmaps-50c58c10-6a1d-464b-bb4e-d21492075bbc no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:11.975 seconds]
[sig-api-machinery] Secrets
/workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:35
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":21,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:116
Jul  2 14:49:51.491: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 21 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Jul  2 14:49:42.957: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-428cf88d-14d3-4e16-8126-e2b0c369c88b" in namespace "security-context-test-9726" to be "Succeeded or Failed"
Jul  2 14:49:43.064: INFO: Pod "alpine-nnp-false-428cf88d-14d3-4e16-8126-e2b0c369c88b": Phase="Pending", Reason="", readiness=false. Elapsed: 105.351597ms
Jul  2 14:49:45.304: INFO: Pod "alpine-nnp-false-428cf88d-14d3-4e16-8126-e2b0c369c88b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.344973641s
Jul  2 14:49:47.417: INFO: Pod "alpine-nnp-false-428cf88d-14d3-4e16-8126-e2b0c369c88b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.458156761s
Jul  2 14:49:49.525: INFO: Pod "alpine-nnp-false-428cf88d-14d3-4e16-8126-e2b0c369c88b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.56616442s
Jul  2 14:49:51.633: INFO: Pod "alpine-nnp-false-428cf88d-14d3-4e16-8126-e2b0c369c88b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.673610597s
Jul  2 14:49:51.633: INFO: Pod "alpine-nnp-false-428cf88d-14d3-4e16-8126-e2b0c369c88b" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jul  2 14:49:51.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-9726" for this suite.


... skipping 2 lines ...
/workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  when creating containers with AllowPrivilegeEscalation
  /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:291
    should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":10,"failed":0}

SS
------------------------------
[BeforeEach] [sig-windows] DNS
  /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/windows/framework.go:28
Jul  2 14:49:52.019: INFO: Only supported for node OS distro [windows] (not debian)
... skipping 114 lines ...
[sig-storage] In-tree Volumes
/workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: blockfs]
  /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:56
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:115
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:169
------------------------------
... skipping 58 lines ...
STEP: creating execpod-noendpoints on node ip-172-20-34-251.eu-west-1.compute.internal
Jul  2 14:49:40.790: INFO: Creating new exec pod
Jul  2 14:49:49.169: INFO: waiting up to 30s to connect to no-pods:80
STEP: hitting service no-pods:80 from pod execpod-noendpoints on node ip-172-20-34-251.eu-west-1.compute.internal
Jul  2 14:49:49.170: INFO: Running '/home/prow/go/src/k8s.io/kops/kubernetes/platforms/linux/amd64/kubectl --server=https://api.e2e-e3278a4a23-ff1eb.test-cncf-aws.k8s.io --kubeconfig=/tmp/kops363865862/kubeconfig exec --namespace=services-6307 execpod-noendpoints6f7t6 -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80'
Jul  2 14:49:52.513: INFO: rc: 1
Jul  2 14:49:52.513: INFO: error contained 'REFUSED', as expected: error running /home/prow/go/src/k8s.io/kops/kubernetes/platforms/linux/amd64/kubectl --server=https://api.e2e-e3278a4a23-ff1eb.test-cncf-aws.k8s.io --kubeconfig=/tmp/kops363865862/kubeconfig exec --namespace=services-6307 execpod-noendpoints6f7t6 -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80:
Command stdout:

stderr:
+ /agnhost connect --timeout=3s no-pods:80
REFUSED
command terminated with exit code 1

error:
exit status 1
[AfterEach] [sig-network] Services
  /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jul  2 14:49:52.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6307" for this suite.
[AfterEach] [sig-network] Services
... skipping 20 lines ...
  /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jul  2 14:49:52.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-4356" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":-1,"completed":2,"skipped":36,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:116
Jul  2 14:49:53.125: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 53 lines ...
• [SLOW TEST:15.176 seconds]
[sig-network] Services
/workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should allow pods to hairpin back to themselves through services
  /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1059
------------------------------
{"msg":"PASSED [sig-network] Services should allow pods to hairpin back to themselves through services","total":-1,"completed":1,"skipped":11,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:116
Jul  2 14:49:54.649: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 69 lines ...
Jul  2 14:49:47.367: INFO: >>> kubeConfig: /tmp/kops363865862/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
Jul  2 14:49:48.095: INFO: Waiting up to 5m0s for pod "downward-api-4cfa1bf4-260c-4f59-9c36-71695293e54a" in namespace "downward-api-8305" to be "Succeeded or Failed"
Jul  2 14:49:48.204: INFO: Pod "downward-api-4cfa1bf4-260c-4f59-9c36-71695293e54a": Phase="Pending", Reason="", readiness=false. Elapsed: 108.969009ms
Jul  2 14:49:50.313: INFO: Pod "downward-api-4cfa1bf4-260c-4f59-9c36-71695293e54a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.217782895s
Jul  2 14:49:52.433: INFO: Pod "downward-api-4cfa1bf4-260c-4f59-9c36-71695293e54a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.337538684s
Jul  2 14:49:54.542: INFO: Pod "downward-api-4cfa1bf4-260c-4f59-9c36-71695293e54a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.447004287s
STEP: Saw pod success
Jul  2 14:49:54.542: INFO: Pod "downward-api-4cfa1bf4-260c-4f59-9c36-71695293e54a" satisfied condition "Succeeded or Failed"
Jul  2 14:49:54.652: INFO: Trying to get logs from node ip-172-20-34-251.eu-west-1.compute.internal pod downward-api-4cfa1bf4-260c-4f59-9c36-71695293e54a container dapi-container: <nil>
STEP: delete the pod
Jul  2 14:49:54.888: INFO: Waiting for pod downward-api-4cfa1bf4-260c-4f59-9c36-71695293e54a to disappear
Jul  2 14:49:54.996: INFO: Pod downward-api-4cfa1bf4-260c-4f59-9c36-71695293e54a no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:7.852 seconds]
[sig-node] Downward API
/workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":4,"failed":0}

S
------------------------------
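[Editor's note] The Downward API test above checks that a pod's own UID is exposed to its containers as an environment variable. A minimal sketch of that construct (the env var name is hypothetical; the fieldRef path is the standard downward-API one):

    // Illustrative only: an env var populated from the pod's own UID via the
    // downward API, the mechanism the "pod UID as env vars" test exercises.
    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        env := corev1.EnvVar{
            Name: "POD_UID", // hypothetical variable name
            ValueFrom: &corev1.EnvVarSource{
                FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.uid"},
            },
        }
        out, _ := json.MarshalIndent(env, "", "  ")
        fmt.Println(string(out))
    }
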
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:116
Jul  2 14:49:55.249: INFO: Driver cinder doesn't support ntfs -- skipping
... skipping 381 lines ...
  /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188
    One pod requesting one prebound PVC
    /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:205
      should be able to mount volume and write from pod1
      /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:234
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":1,"skipped":3,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning
  /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:116
Jul  2 14:49:59.495: INFO: Driver nfs doesn't support ntfs -- skipping
... skipping 28 lines ...
[sig-storage] In-tree Volumes
/workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: blockfs]
  /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:56
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:115
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:169
------------------------------
... skipping 130 lines ...
  /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188
    One pod requesting one prebound PVC
    /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:205
      should be able to mount volume and write from pod1
      /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:234
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":1,"skipped":5,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:116
Jul  2 14:50:02.632: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 73 lines ...
• [SLOW TEST:8.869 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate configmap [Conformance]
  /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":-1,"completed":2,"skipped":13,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:116
Jul  2 14:50:04.290: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 210 lines ...
      Driver local doesn't support ntfs -- skipping

      /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:174
------------------------------
S
------------------------------
{"msg":"PASSED [sig-network] Services should be rejected when no endpoints exist","total":-1,"completed":1,"skipped":5,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul  2 14:49:52.758: INFO: >>> kubeConfig: /tmp/kops363865862/kubeconfig
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 88 lines ...
Jul  2 14:49:40.260: INFO: Warning: Environment does not support getting controller-manager metrics
STEP: creating a test aws volume
Jul  2 14:49:41.059: INFO: Successfully created a new PD: "aws://eu-west-1a/vol-08c4a40da00e354d1".
Jul  2 14:49:41.059: INFO: Creating resource for inline volume
STEP: Creating pod exec-volume-test-inlinevolume-tl6s
STEP: Creating a pod to test exec-volume-test
Jul  2 14:49:41.176: INFO: Waiting up to 5m0s for pod "exec-volume-test-inlinevolume-tl6s" in namespace "volume-2726" to be "Succeeded or Failed"
Jul  2 14:49:41.287: INFO: Pod "exec-volume-test-inlinevolume-tl6s": Phase="Pending", Reason="", readiness=false. Elapsed: 110.758823ms
Jul  2 14:49:43.405: INFO: Pod "exec-volume-test-inlinevolume-tl6s": Phase="Pending", Reason="", readiness=false. Elapsed: 2.228375322s
Jul  2 14:49:45.519: INFO: Pod "exec-volume-test-inlinevolume-tl6s": Phase="Pending", Reason="", readiness=false. Elapsed: 4.342686569s
Jul  2 14:49:47.643: INFO: Pod "exec-volume-test-inlinevolume-tl6s": Phase="Pending", Reason="", readiness=false. Elapsed: 6.466407262s
Jul  2 14:49:49.761: INFO: Pod "exec-volume-test-inlinevolume-tl6s": Phase="Pending", Reason="", readiness=false. Elapsed: 8.584577817s
Jul  2 14:49:51.876: INFO: Pod "exec-volume-test-inlinevolume-tl6s": Phase="Pending", Reason="", readiness=false. Elapsed: 10.699319249s
Jul  2 14:49:53.994: INFO: Pod "exec-volume-test-inlinevolume-tl6s": Phase="Pending", Reason="", readiness=false. Elapsed: 12.818058257s
Jul  2 14:49:56.111: INFO: Pod "exec-volume-test-inlinevolume-tl6s": Phase="Pending", Reason="", readiness=false. Elapsed: 14.934896151s
Jul  2 14:49:58.224: INFO: Pod "exec-volume-test-inlinevolume-tl6s": Phase="Pending", Reason="", readiness=false. Elapsed: 17.047644608s
Jul  2 14:50:00.345: INFO: Pod "exec-volume-test-inlinevolume-tl6s": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.168356827s
STEP: Saw pod success
Jul  2 14:50:00.345: INFO: Pod "exec-volume-test-inlinevolume-tl6s" satisfied condition "Succeeded or Failed"
Jul  2 14:50:00.459: INFO: Trying to get logs from node ip-172-20-48-53.eu-west-1.compute.internal pod exec-volume-test-inlinevolume-tl6s container exec-container-inlinevolume-tl6s: <nil>
STEP: delete the pod
Jul  2 14:50:00.703: INFO: Waiting for pod exec-volume-test-inlinevolume-tl6s to disappear
Jul  2 14:50:00.816: INFO: Pod exec-volume-test-inlinevolume-tl6s no longer exists
STEP: Deleting pod exec-volume-test-inlinevolume-tl6s
Jul  2 14:50:00.816: INFO: Deleting pod "exec-volume-test-inlinevolume-tl6s" in namespace "volume-2726"
Jul  2 14:50:01.119: INFO: Couldn't delete PD "aws://eu-west-1a/vol-08c4a40da00e354d1", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-08c4a40da00e354d1 is currently attached to i-01e03df0ffe577910
	status code: 400, request id: 401769a5-415e-475f-97a4-ace16d667cf6
Jul  2 14:50:06.660: INFO: Successfully deleted PD "aws://eu-west-1a/vol-08c4a40da00e354d1".
Jul  2 14:50:06.660: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jul  2 14:50:06.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 7 lines ...
  /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:56
    [Testpattern: Inline-volume (default fs)] volumes
    /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:115
      should allow exec of files on the volume
      /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:193
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":1,"skipped":0,"failed":0}

S
------------------------------
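[Editor's note] The repeated "Waiting up to 5m0s for pod ... to be 'Succeeded or Failed'" runs throughout this log come from a poll-until-terminal-phase loop. A sketch of that pattern under assumed wiring (the clientset comes from the caller; this is not the framework's actual helper):

    // Illustrative only: poll a pod every two seconds until it reaches a
    // terminal phase, mirroring the wait lines in the log above.
    package podwait

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // WaitForPodTerminal blocks until the named pod is Succeeded or Failed.
    func WaitForPodTerminal(c kubernetes.Interface, ns, name string, timeout time.Duration) error {
        return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
            pod, err := c.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, err
            }
            switch pod.Status.Phase {
            case corev1.PodSucceeded:
                return true, nil
            case corev1.PodFailed:
                return true, fmt.Errorf("pod %s/%s failed", ns, name)
            default:
                return false, nil // still Pending/Running; keep polling
            }
        })
    }
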
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:116
Jul  2 14:50:06.913: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 90 lines ...
Jul  2 14:49:57.998: INFO: PersistentVolumeClaim pvc-95tlz found but phase is Pending instead of Bound.
Jul  2 14:50:00.113: INFO: PersistentVolumeClaim pvc-95tlz found and phase=Bound (6.460001176s)
Jul  2 14:50:00.113: INFO: Waiting up to 3m0s for PersistentVolume local-fnmbg to have phase Bound
Jul  2 14:50:00.227: INFO: PersistentVolume local-fnmbg found and phase=Bound (114.340792ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-lbzj
STEP: Creating a pod to test subpath
Jul  2 14:50:00.579: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-lbzj" in namespace "provisioning-1702" to be "Succeeded or Failed"
Jul  2 14:50:00.697: INFO: Pod "pod-subpath-test-preprovisionedpv-lbzj": Phase="Pending", Reason="", readiness=false. Elapsed: 117.401776ms
Jul  2 14:50:02.815: INFO: Pod "pod-subpath-test-preprovisionedpv-lbzj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.235832955s
Jul  2 14:50:04.929: INFO: Pod "pod-subpath-test-preprovisionedpv-lbzj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.349922044s
STEP: Saw pod success
Jul  2 14:50:04.929: INFO: Pod "pod-subpath-test-preprovisionedpv-lbzj" satisfied condition "Succeeded or Failed"
Jul  2 14:50:05.041: INFO: Trying to get logs from node ip-172-20-34-251.eu-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-lbzj container test-container-volume-preprovisionedpv-lbzj: <nil>
STEP: delete the pod
Jul  2 14:50:05.275: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-lbzj to disappear
Jul  2 14:50:05.387: INFO: Pod pod-subpath-test-preprovisionedpv-lbzj no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-lbzj
Jul  2 14:50:05.387: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-lbzj" in namespace "provisioning-1702"
... skipping 27 lines ...
  /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:56
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:115
      should support non-existent path
      /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:191
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":1,"skipped":2,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:116
Jul  2 14:50:09.950: INFO: Driver gluster doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 87 lines ...
Jul  2 14:49:58.535: INFO: PersistentVolumeClaim pvc-sspdr found but phase is Pending instead of Bound.
Jul  2 14:50:00.647: INFO: PersistentVolumeClaim pvc-sspdr found and phase=Bound (8.609965374s)
Jul  2 14:50:00.647: INFO: Waiting up to 3m0s for PersistentVolume local-tv8z6 to have phase Bound
Jul  2 14:50:00.764: INFO: PersistentVolume local-tv8z6 found and phase=Bound (116.56606ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-7s8f
STEP: Creating a pod to test subpath
Jul  2 14:50:01.104: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-7s8f" in namespace "provisioning-7352" to be "Succeeded or Failed"
Jul  2 14:50:01.220: INFO: Pod "pod-subpath-test-preprovisionedpv-7s8f": Phase="Pending", Reason="", readiness=false. Elapsed: 116.292444ms
Jul  2 14:50:03.331: INFO: Pod "pod-subpath-test-preprovisionedpv-7s8f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.227263608s
Jul  2 14:50:05.443: INFO: Pod "pod-subpath-test-preprovisionedpv-7s8f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.339187522s
Jul  2 14:50:07.565: INFO: Pod "pod-subpath-test-preprovisionedpv-7s8f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.461131387s
STEP: Saw pod success
Jul  2 14:50:07.566: INFO: Pod "pod-subpath-test-preprovisionedpv-7s8f" satisfied condition "Succeeded or Failed"
Jul  2 14:50:07.680: INFO: Trying to get logs from node ip-172-20-61-124.eu-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-7s8f container test-container-volume-preprovisionedpv-7s8f: <nil>
STEP: delete the pod
Jul  2 14:50:07.984: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-7s8f to disappear
Jul  2 14:50:08.096: INFO: Pod pod-subpath-test-preprovisionedpv-7s8f no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-7s8f
Jul  2 14:50:08.096: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-7s8f" in namespace "provisioning-7352"
... skipping 25 lines ...
  /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:56
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:115
      should support non-existent path
      /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:191
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":-1,"completed":3,"skipped":43,"failed":0}
[BeforeEach] [sig-scheduling] Multi-AZ Cluster Volumes [sig-storage]
  /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul  2 14:50:12.006: INFO: >>> kubeConfig: /tmp/kops363865862/kubeconfig
STEP: Building a namespace api object, basename multi-az
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 40 lines ...
[BeforeEach] [sig-apps] Job
  /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul  2 14:50:04.552: INFO: >>> kubeConfig: /tmp/kops363865862/kubeconfig
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are not locally restarted
  /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:117
STEP: Looking for a node to schedule job pod
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jul  2 14:50:13.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-1872" for this suite.


• [SLOW TEST:9.116 seconds]
[sig-apps] Job
/workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are not locally restarted
  /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:117
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are not locally restarted","total":-1,"completed":3,"skipped":38,"failed":0}

SS
------------------------------
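[Editor's note] The Job case above ("tasks sometimes fail and are not locally restarted") relies on RestartPolicyNever: a failed pod is replaced by the Job controller rather than restarted in place. A hypothetical Job of that shape (names, image, counts, and the roughly-half-failing command are all illustrative):

    // Illustrative only: a Job whose pods sometimes exit non-zero and are
    // rescheduled, not restarted locally, until completions are reached.
    package main

    import (
        "encoding/json"
        "fmt"

        batchv1 "k8s.io/api/batch/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        completions := int32(4)
        parallelism := int32(2)
        job := batchv1.Job{
            ObjectMeta: metav1.ObjectMeta{Name: "sometimes-fail-demo"},
            Spec: batchv1.JobSpec{
                Completions: &completions,
                Parallelism: &parallelism,
                Template: corev1.PodTemplateSpec{
                    Spec: corev1.PodSpec{
                        // Never restart in place: failures produce replacement pods.
                        RestartPolicy: corev1.RestartPolicyNever,
                        Containers: []corev1.Container{{
                            Name:    "worker",
                            Image:   "busybox",
                            // Fails roughly half the time, based on the current second.
                            Command: []string{"sh", "-c", "exit $(($(date +%s) % 2))"},
                        }},
                    },
                },
            },
        }
        out, _ := json.MarshalIndent(job, "", "  ")
        fmt.Println(string(out))
    }
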
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:116
Jul  2 14:50:13.687: INFO: Only supported for providers [gce gke] (not aws)
... skipping 58 lines ...
      Driver local doesn't support InlineVolume -- skipping

      /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:169
------------------------------
SSSSSSS
------------------------------
{"msg":"PASSED [sig-node] RuntimeClass should reject a Pod requesting a RuntimeClass with an unconfigured handler","total":-1,"completed":1,"skipped":1,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:116
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul  2 14:49:40.605: INFO: >>> kubeConfig: /tmp/kops363865862/kubeconfig
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support existing single file [LinuxOnly]
  /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:216
Jul  2 14:49:42.656: INFO: Could not find CSI Name for in-tree plugin kubernetes.io/host-path
W0702 14:49:42.771411    8786 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul  2 14:49:42.771: INFO: Warning: Environment does not support getting controller-manager metrics
Jul  2 14:49:43.009: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-2072" in namespace "provisioning-2072" to be "Succeeded or Failed"
Jul  2 14:49:43.123: INFO: Pod "hostpath-symlink-prep-provisioning-2072": Phase="Pending", Reason="", readiness=false. Elapsed: 114.272941ms
Jul  2 14:49:45.300: INFO: Pod "hostpath-symlink-prep-provisioning-2072": Phase="Pending", Reason="", readiness=false. Elapsed: 2.290758851s
Jul  2 14:49:47.419: INFO: Pod "hostpath-symlink-prep-provisioning-2072": Phase="Pending", Reason="", readiness=false. Elapsed: 4.409461114s
Jul  2 14:49:49.531: INFO: Pod "hostpath-symlink-prep-provisioning-2072": Phase="Pending", Reason="", readiness=false. Elapsed: 6.522121167s
Jul  2 14:49:51.656: INFO: Pod "hostpath-symlink-prep-provisioning-2072": Phase="Pending", Reason="", readiness=false. Elapsed: 8.646332863s
Jul  2 14:49:53.772: INFO: Pod "hostpath-symlink-prep-provisioning-2072": Phase="Pending", Reason="", readiness=false. Elapsed: 10.763195813s
Jul  2 14:49:55.888: INFO: Pod "hostpath-symlink-prep-provisioning-2072": Phase="Pending", Reason="", readiness=false. Elapsed: 12.878403573s
Jul  2 14:49:57.998: INFO: Pod "hostpath-symlink-prep-provisioning-2072": Phase="Pending", Reason="", readiness=false. Elapsed: 14.989124034s
Jul  2 14:50:00.109: INFO: Pod "hostpath-symlink-prep-provisioning-2072": Phase="Pending", Reason="", readiness=false. Elapsed: 17.099583735s
Jul  2 14:50:02.222: INFO: Pod "hostpath-symlink-prep-provisioning-2072": Phase="Pending", Reason="", readiness=false. Elapsed: 19.213228695s
Jul  2 14:50:04.332: INFO: Pod "hostpath-symlink-prep-provisioning-2072": Phase="Succeeded", Reason="", readiness=false. Elapsed: 21.322784105s
STEP: Saw pod success
Jul  2 14:50:04.332: INFO: Pod "hostpath-symlink-prep-provisioning-2072" satisfied condition "Succeeded or Failed"
Jul  2 14:50:04.332: INFO: Deleting pod "hostpath-symlink-prep-provisioning-2072" in namespace "provisioning-2072"
Jul  2 14:50:04.448: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-2072" to be fully deleted
Jul  2 14:50:04.556: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-p79q
STEP: Creating a pod to test subpath
Jul  2 14:50:04.672: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-p79q" in namespace "provisioning-2072" to be "Succeeded or Failed"
Jul  2 14:50:04.781: INFO: Pod "pod-subpath-test-inlinevolume-p79q": Phase="Pending", Reason="", readiness=false. Elapsed: 108.862381ms
Jul  2 14:50:06.904: INFO: Pod "pod-subpath-test-inlinevolume-p79q": Phase="Pending", Reason="", readiness=false. Elapsed: 2.232417946s
Jul  2 14:50:09.015: INFO: Pod "pod-subpath-test-inlinevolume-p79q": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.343398071s
STEP: Saw pod success
Jul  2 14:50:09.015: INFO: Pod "pod-subpath-test-inlinevolume-p79q" satisfied condition "Succeeded or Failed"
Jul  2 14:50:09.125: INFO: Trying to get logs from node ip-172-20-38-177.eu-west-1.compute.internal pod pod-subpath-test-inlinevolume-p79q container test-container-subpath-inlinevolume-p79q: <nil>
STEP: delete the pod
Jul  2 14:50:09.384: INFO: Waiting for pod pod-subpath-test-inlinevolume-p79q to disappear
Jul  2 14:50:09.495: INFO: Pod pod-subpath-test-inlinevolume-p79q no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-p79q
Jul  2 14:50:09.495: INFO: Deleting pod "pod-subpath-test-inlinevolume-p79q" in namespace "provisioning-2072"
STEP: Deleting pod
Jul  2 14:50:09.609: INFO: Deleting pod "pod-subpath-test-inlinevolume-p79q" in namespace "provisioning-2072"
Jul  2 14:50:09.828: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-2072" in namespace "provisioning-2072" to be "Succeeded or Failed"
Jul  2 14:50:09.938: INFO: Pod "hostpath-symlink-prep-provisioning-2072": Phase="Pending", Reason="", readiness=false. Elapsed: 109.714908ms
Jul  2 14:50:12.052: INFO: Pod "hostpath-symlink-prep-provisioning-2072": Phase="Pending", Reason="", readiness=false. Elapsed: 2.223823836s
Jul  2 14:50:14.163: INFO: Pod "hostpath-symlink-prep-provisioning-2072": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.335327985s
STEP: Saw pod success
Jul  2 14:50:14.163: INFO: Pod "hostpath-symlink-prep-provisioning-2072" satisfied condition "Succeeded or Failed"
Jul  2 14:50:14.163: INFO: Deleting pod "hostpath-symlink-prep-provisioning-2072" in namespace "provisioning-2072"
Jul  2 14:50:14.283: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-2072" to be fully deleted
Jul  2 14:50:14.392: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jul  2 14:50:14.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 7 lines ...
  /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:56
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:115
      should support existing single file [LinuxOnly]
      /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:216
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":2,"skipped":1,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:116
Jul  2 14:50:14.641: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 69 lines ...
[sig-storage] In-tree Volumes
/workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: cinder]
  /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:56
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:115
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [openstack] (not aws)

      /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1089
------------------------------
... skipping 46 lines ...
/workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  when scheduling a busybox command that always fails in a pod
  /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:79
    should have an terminated reason [NodeConformance] [Conformance]
    /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":45,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:116
Jul  2 14:50:22.005: INFO: Distro debian doesn't support ntfs -- skipping
... skipping 46 lines ...
Jul  2 14:50:01.701: INFO: PersistentVolumeClaim pvc-q8bn9 found and phase=Bound (6.441528115s)
Jul  2 14:50:01.701: INFO: Waiting up to 3m0s for PersistentVolume nfs-52hvl to have phase Bound
Jul  2 14:50:01.810: INFO: PersistentVolume nfs-52hvl found and phase=Bound (108.711948ms)
STEP: Checking pod has write access to PersistentVolume
Jul  2 14:50:02.028: INFO: Creating nfs test pod
Jul  2 14:50:02.139: INFO: Pod should terminate with exitcode 0 (success)
Jul  2 14:50:02.139: INFO: Waiting up to 5m0s for pod "pvc-tester-4bw6p" in namespace "pv-6076" to be "Succeeded or Failed"
Jul  2 14:50:02.249: INFO: Pod "pvc-tester-4bw6p": Phase="Pending", Reason="", readiness=false. Elapsed: 110.395159ms
Jul  2 14:50:04.359: INFO: Pod "pvc-tester-4bw6p": Phase="Pending", Reason="", readiness=false. Elapsed: 2.219806011s
Jul  2 14:50:06.465: INFO: Pod "pvc-tester-4bw6p": Phase="Pending", Reason="", readiness=false. Elapsed: 4.326066049s
Jul  2 14:50:08.576: INFO: Pod "pvc-tester-4bw6p": Phase="Pending", Reason="", readiness=false. Elapsed: 6.43747773s
Jul  2 14:50:10.685: INFO: Pod "pvc-tester-4bw6p": Phase="Pending", Reason="", readiness=false. Elapsed: 8.546259886s
Jul  2 14:50:12.796: INFO: Pod "pvc-tester-4bw6p": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.65694495s
STEP: Saw pod success
Jul  2 14:50:12.796: INFO: Pod "pvc-tester-4bw6p" satisfied condition "Succeeded or Failed"
Jul  2 14:50:12.796: INFO: Pod pvc-tester-4bw6p succeeded 
Jul  2 14:50:12.796: INFO: Deleting pod "pvc-tester-4bw6p" in namespace "pv-6076"
Jul  2 14:50:12.912: INFO: Wait up to 5m0s for pod "pvc-tester-4bw6p" to be fully deleted
STEP: Deleting the PVC to invoke the reclaim policy.
Jul  2 14:50:13.020: INFO: Deleting PVC pvc-q8bn9 to trigger reclamation of PV 
Jul  2 14:50:13.020: INFO: Deleting PersistentVolumeClaim "pvc-q8bn9"
... skipping 23 lines ...
  /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:122
    with Single PV - PVC pairs
    /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:155
      create a PVC and a pre-bound PV: test write access
      /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:187
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PVC and a pre-bound PV: test write access","total":-1,"completed":1,"skipped":2,"failed":0}

S
------------------------------
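[Editor's note] "Pre-bound PV" in the NFS result above means the PersistentVolume names its claim up front via ClaimRef, so only that PVC can bind it. A sketch of the arrangement (server address and all names are hypothetical):

    // Illustrative only: a PV pre-bound to a specific claim through ClaimRef,
    // the setup the "pre-bound PV: test write access" case writes through.
    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pv := corev1.PersistentVolume{
            ObjectMeta: metav1.ObjectMeta{Name: "nfs-demo"},
            Spec: corev1.PersistentVolumeSpec{
                Capacity: corev1.ResourceList{
                    corev1.ResourceStorage: resource.MustParse("2Gi"),
                },
                AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
                // Pre-binding: the PV names the claim allowed to bind it.
                ClaimRef: &corev1.ObjectReference{Namespace: "pv-demo", Name: "pvc-demo"},
                PersistentVolumeSource: corev1.PersistentVolumeSource{
                    NFS: &corev1.NFSVolumeSource{Server: "10.0.0.1", Path: "/exports"},
                },
            },
        }
        out, _ := json.MarshalIndent(pv, "", "  ")
        fmt.Println(string(out))
    }
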
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:116
Jul  2 14:50:22.116: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 112 lines ...
[sig-storage] In-tree Volumes
/workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: nfs]
  /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:56
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:115
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver "nfs" does not support topology - skipping

      /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:97
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":1,"skipped":6,"failed":0}
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul  2 14:50:12.434: INFO: >>> kubeConfig: /tmp/kops363865862/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-map-6d8e9f56-e9c1-45d7-a023-35f492a2bcad
STEP: Creating a pod to test consume secrets
Jul  2 14:50:13.224: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e6ff9c8a-4b84-411b-b363-2d85ad7dca8a" in namespace "projected-7688" to be "Succeeded or Failed"
Jul  2 14:50:13.338: INFO: Pod "pod-projected-secrets-e6ff9c8a-4b84-411b-b363-2d85ad7dca8a": Phase="Pending", Reason="", readiness=false. Elapsed: 114.062192ms
Jul  2 14:50:15.459: INFO: Pod "pod-projected-secrets-e6ff9c8a-4b84-411b-b363-2d85ad7dca8a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.235501789s
Jul  2 14:50:17.576: INFO: Pod "pod-projected-secrets-e6ff9c8a-4b84-411b-b363-2d85ad7dca8a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.352757847s
Jul  2 14:50:19.688: INFO: Pod "pod-projected-secrets-e6ff9c8a-4b84-411b-b363-2d85ad7dca8a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.464691511s
Jul  2 14:50:21.809: INFO: Pod "pod-projected-secrets-e6ff9c8a-4b84-411b-b363-2d85ad7dca8a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.585653162s
STEP: Saw pod success
Jul  2 14:50:21.811: INFO: Pod "pod-projected-secrets-e6ff9c8a-4b84-411b-b363-2d85ad7dca8a" satisfied condition "Succeeded or Failed"
Jul  2 14:50:21.924: INFO: Trying to get logs from node ip-172-20-34-251.eu-west-1.compute.internal pod pod-projected-secrets-e6ff9c8a-4b84-411b-b363-2d85ad7dca8a container projected-secret-volume-test: <nil>
STEP: delete the pod
Jul  2 14:50:22.172: INFO: Waiting for pod pod-projected-secrets-e6ff9c8a-4b84-411b-b363-2d85ad7dca8a to disappear
Jul  2 14:50:22.283: INFO: Pod pod-projected-secrets-e6ff9c8a-4b84-411b-b363-2d85ad7dca8a no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:10.083 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":6,"failed":0}

SSS
------------------------------
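[Editor's note] The projected-secret test above consumes a secret "with mappings", i.e. a key-to-path item that renames the file the pod sees. A sketch of that volume (secret name, key, and path are hypothetical):

    // Illustrative only: a projected volume that maps one secret key to a
    // chosen file name, the construct the "with mappings" case consumes.
    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        vol := corev1.Volume{
            Name: "projected-secret",
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    Sources: []corev1.VolumeProjection{{
                        Secret: &corev1.SecretProjection{
                            LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test-map"},
                            Items: []corev1.KeyToPath{{
                                Key:  "data-1",          // key inside the secret
                                Path: "new-path-data-1", // file name the pod sees
                            }},
                        },
                    }},
                },
            },
        }
        out, _ := json.MarshalIndent(vol, "", "  ")
        fmt.Println(string(out))
    }
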
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:116
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 20 lines ...
Jul  2 14:49:59.544: INFO: PersistentVolumeClaim pvc-6f7lx found but phase is Pending instead of Bound.
Jul  2 14:50:01.654: INFO: PersistentVolumeClaim pvc-6f7lx found and phase=Bound (8.557939129s)
Jul  2 14:50:01.654: INFO: Waiting up to 3m0s for PersistentVolume local-b9675 to have phase Bound
Jul  2 14:50:01.764: INFO: PersistentVolume local-b9675 found and phase=Bound (110.400188ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-8cd2
STEP: Creating a pod to test subpath
Jul  2 14:50:02.108: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-8cd2" in namespace "provisioning-7679" to be "Succeeded or Failed"
Jul  2 14:50:02.219: INFO: Pod "pod-subpath-test-preprovisionedpv-8cd2": Phase="Pending", Reason="", readiness=false. Elapsed: 111.168663ms
Jul  2 14:50:04.329: INFO: Pod "pod-subpath-test-preprovisionedpv-8cd2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.220456094s
Jul  2 14:50:06.437: INFO: Pod "pod-subpath-test-preprovisionedpv-8cd2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.329007419s
Jul  2 14:50:08.548: INFO: Pod "pod-subpath-test-preprovisionedpv-8cd2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.440036074s
Jul  2 14:50:10.677: INFO: Pod "pod-subpath-test-preprovisionedpv-8cd2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.568916545s
Jul  2 14:50:12.788: INFO: Pod "pod-subpath-test-preprovisionedpv-8cd2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.67970924s
Jul  2 14:50:14.900: INFO: Pod "pod-subpath-test-preprovisionedpv-8cd2": Phase="Pending", Reason="", readiness=false. Elapsed: 12.791866097s
Jul  2 14:50:17.012: INFO: Pod "pod-subpath-test-preprovisionedpv-8cd2": Phase="Pending", Reason="", readiness=false. Elapsed: 14.904249788s
Jul  2 14:50:19.123: INFO: Pod "pod-subpath-test-preprovisionedpv-8cd2": Phase="Pending", Reason="", readiness=false. Elapsed: 17.015043909s
Jul  2 14:50:21.232: INFO: Pod "pod-subpath-test-preprovisionedpv-8cd2": Phase="Pending", Reason="", readiness=false. Elapsed: 19.123850138s
Jul  2 14:50:23.345: INFO: Pod "pod-subpath-test-preprovisionedpv-8cd2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 21.23636283s
STEP: Saw pod success
Jul  2 14:50:23.345: INFO: Pod "pod-subpath-test-preprovisionedpv-8cd2" satisfied condition "Succeeded or Failed"
Jul  2 14:50:23.453: INFO: Trying to get logs from node ip-172-20-61-124.eu-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-8cd2 container test-container-subpath-preprovisionedpv-8cd2: <nil>
STEP: delete the pod
Jul  2 14:50:23.712: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-8cd2 to disappear
Jul  2 14:50:23.824: INFO: Pod pod-subpath-test-preprovisionedpv-8cd2 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-8cd2
Jul  2 14:50:23.824: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-8cd2" in namespace "provisioning-7679"
... skipping 20 lines ...
  /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:56
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:115
      should support readOnly directory specified in the volumeMount
      /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:361
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":2,"skipped":6,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:116
Jul  2 14:50:25.541: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 77 lines ...
/workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:378
    should support port-forward
    /workspace/anago-v1.19.0-beta.1.269+e7ca64fbe16d0c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:619
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support port-forward","total":-1,"completed":2,"skipped":25,"failed":0}

SSS{"component":"entrypoint","file":"prow/entrypoint/run.go:168","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Entrypoint received interrupt: terminated","severity":"error","time":"2020-07-02T14:50:28Z"}